./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img

📌 S Retain class distribution for seed 6:
Class 0: 5284
Class 1: 4210

📌 S Forget class distribution for seed 6:
Class 0: 527
Class 1: 527

📊 Updated class distribution:
Retain set:
  Class 0: 5415
  Class 1: 4341
Forget set:
  Class 0: 396
  Class 1: 396
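The seed-dependent retain/forget split above (equal per-class forget counts, everything else retained) can be reproduced with a seeded per-class sampler. A minimal sketch, assuming labels are available as a plain Python list; the function name and arguments are illustrative, not taken from the training code:

```python
import random

def split_retain_forget(labels, n_forget_per_class, seed=6):
    """Split sample indices into retain/forget sets.

    Draws n_forget_per_class indices from each class with a seeded RNG
    so the split is reproducible across runs; all remaining indices
    form the retain set.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)

    forget = []
    for y, idxs in sorted(by_class.items()):
        forget.extend(rng.sample(idxs, n_forget_per_class))

    forget_set = set(forget)
    retain = [i for i in range(len(labels)) if i not in forget_set]
    return retain, forget
```

Sampling the forget set per class (rather than globally) is what keeps the forget-class counts exactly equal, as in the log.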
./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img
⚠️ Warning: Retain train loader may not be shuffled.
Training Epoch: 1 [256/9756]	Loss: 0.7112	LR: 0.000000
Training Epoch: 1 [512/9756]	Loss: 0.7266	LR: 0.002564
Training Epoch: 1 [768/9756]	Loss: 0.6952	LR: 0.005128
Training Epoch: 1 [1024/9756]	Loss: 0.7526	LR: 0.007692
Training Epoch: 1 [1280/9756]	Loss: 0.7492	LR: 0.010256
Training Epoch: 1 [1536/9756]	Loss: 0.7307	LR: 0.012821
Training Epoch: 1 [1792/9756]	Loss: 0.7119	LR: 0.015385
Training Epoch: 1 [2048/9756]	Loss: 0.6920	LR: 0.017949
Training Epoch: 1 [2304/9756]	Loss: 0.6817	LR: 0.020513
Training Epoch: 1 [2560/9756]	Loss: 0.7569	LR: 0.023077
Training Epoch: 1 [2816/9756]	Loss: 0.7112	LR: 0.025641
Training Epoch: 1 [3072/9756]	Loss: 0.7427	LR: 0.028205
Training Epoch: 1 [3328/9756]	Loss: 0.7921	LR: 0.030769
Training Epoch: 1 [3584/9756]	Loss: 0.6876	LR: 0.033333
Training Epoch: 1 [3840/9756]	Loss: 0.8740	LR: 0.035897
Training Epoch: 1 [4096/9756]	Loss: 2.2759	LR: 0.038462
Training Epoch: 1 [4352/9756]	Loss: 1.1973	LR: 0.041026
Training Epoch: 1 [4608/9756]	Loss: 0.8521	LR: 0.043590
Training Epoch: 1 [4864/9756]	Loss: 0.8253	LR: 0.046154
Training Epoch: 1 [5120/9756]	Loss: 0.7574	LR: 0.048718
Training Epoch: 1 [5376/9756]	Loss: 0.8303	LR: 0.051282
Training Epoch: 1 [5632/9756]	Loss: 0.8235	LR: 0.053846
Training Epoch: 1 [5888/9756]	Loss: 0.8405	LR: 0.056410
Training Epoch: 1 [6144/9756]	Loss: 0.7701	LR: 0.058974
Training Epoch: 1 [6400/9756]	Loss: 0.6759	LR: 0.061538
Training Epoch: 1 [6656/9756]	Loss: 0.7014	LR: 0.064103
Training Epoch: 1 [6912/9756]	Loss: 0.7211	LR: 0.066667
Training Epoch: 1 [7168/9756]	Loss: 0.6649	LR: 0.069231
Training Epoch: 1 [7424/9756]	Loss: 0.7146	LR: 0.071795
Training Epoch: 1 [7680/9756]	Loss: 0.7290	LR: 0.074359
Training Epoch: 1 [7936/9756]	Loss: 0.7194	LR: 0.076923
Training Epoch: 1 [8192/9756]	Loss: 0.7877	LR: 0.079487
Training Epoch: 1 [8448/9756]	Loss: 0.7554	LR: 0.082051
Training Epoch: 1 [8704/9756]	Loss: 0.9046	LR: 0.084615
Training Epoch: 1 [8960/9756]	Loss: 0.7802	LR: 0.087179
Training Epoch: 1 [9216/9756]	Loss: 0.7487	LR: 0.089744
Training Epoch: 1 [9472/9756]	Loss: 0.9396	LR: 0.092308
Training Epoch: 1 [9728/9756]	Loss: 0.6911	LR: 0.094872
Training Epoch: 1 [9756/9756]	Loss: 0.8104	LR: 0.097436
Epoch 1 - Average Train Loss: 0.8085, Train Accuracy: 0.5244
Epoch 1 training time consumed: 322.15s
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0804, Accuracy: 0.5545, Time consumed:8.23s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-1-best.pth
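The LR column in epoch 1 climbs linearly from 0.000000 to 0.097436 and only reaches the 0.1 base rate at epoch 2, which is consistent with a one-epoch linear warmup over the ~39 batches that cover the 9756-sample retain set at batch size 256. A minimal sketch of such a per-iteration warmup, with names chosen for illustration (the actual scheduler class is not shown in this log):

```python
def warmup_lr(base_lr, step, total_warmup_steps):
    """Linear warmup: scale base_lr by the fraction of warmup completed.

    With step counted from 0, the first iteration runs at lr = 0 and the
    last warmup iteration just below base_lr, matching the log's ramp
    from 0.000000 up to 0.097436 before settling at 0.100000.
    """
    if step >= total_warmup_steps:
        return base_lr
    return base_lr * step / total_warmup_steps

# 39 batches of 256 cover the 9756-sample retain set (ceil division)
steps_per_epoch = -(-9756 // 256)
```

With `steps_per_epoch = 39`, step 1 gives 0.1 * 1/39 ≈ 0.002564, the second LR value printed in epoch 1.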
Training Epoch: 2 [256/9756]	Loss: 0.7697	LR: 0.100000
Training Epoch: 2 [512/9756]	Loss: 0.6805	LR: 0.100000
Training Epoch: 2 [768/9756]	Loss: 0.7241	LR: 0.100000
Training Epoch: 2 [1024/9756]	Loss: 0.8881	LR: 0.100000
Training Epoch: 2 [1280/9756]	Loss: 0.6917	LR: 0.100000
Training Epoch: 2 [1536/9756]	Loss: 0.7461	LR: 0.100000
Training Epoch: 2 [1792/9756]	Loss: 0.6847	LR: 0.100000
Training Epoch: 2 [2048/9756]	Loss: 0.7121	LR: 0.100000
Training Epoch: 2 [2304/9756]	Loss: 0.7827	LR: 0.100000
Training Epoch: 2 [2560/9756]	Loss: 0.6826	LR: 0.100000
Training Epoch: 2 [2816/9756]	Loss: 0.7752	LR: 0.100000
Training Epoch: 2 [3072/9756]	Loss: 0.7756	LR: 0.100000
Training Epoch: 2 [3328/9756]	Loss: 0.7149	LR: 0.100000
Training Epoch: 2 [3584/9756]	Loss: 0.7626	LR: 0.100000
Training Epoch: 2 [3840/9756]	Loss: 0.7213	LR: 0.100000
Training Epoch: 2 [4096/9756]	Loss: 0.6940	LR: 0.100000
Training Epoch: 2 [4352/9756]	Loss: 0.7338	LR: 0.100000
Training Epoch: 2 [4608/9756]	Loss: 0.7456	LR: 0.100000
Training Epoch: 2 [4864/9756]	Loss: 0.7142	LR: 0.100000
Training Epoch: 2 [5120/9756]	Loss: 0.7367	LR: 0.100000
Training Epoch: 2 [5376/9756]	Loss: 0.7218	LR: 0.100000
Training Epoch: 2 [5632/9756]	Loss: 0.7124	LR: 0.100000
Training Epoch: 2 [5888/9756]	Loss: 0.7007	LR: 0.100000
Training Epoch: 2 [6144/9756]	Loss: 0.7177	LR: 0.100000
Training Epoch: 2 [6400/9756]	Loss: 0.7079	LR: 0.100000
Training Epoch: 2 [6656/9756]	Loss: 0.6776	LR: 0.100000
Training Epoch: 2 [6912/9756]	Loss: 0.7101	LR: 0.100000
Training Epoch: 2 [7168/9756]	Loss: 0.6994	LR: 0.100000
Training Epoch: 2 [7424/9756]	Loss: 0.7161	LR: 0.100000
Training Epoch: 2 [7680/9756]	Loss: 0.7031	LR: 0.100000
Training Epoch: 2 [7936/9756]	Loss: 0.6836	LR: 0.100000
Training Epoch: 2 [8192/9756]	Loss: 0.6790	LR: 0.100000
Training Epoch: 2 [8448/9756]	Loss: 0.6983	LR: 0.100000
Training Epoch: 2 [8704/9756]	Loss: 0.6650	LR: 0.100000
Training Epoch: 2 [8960/9756]	Loss: 0.6696	LR: 0.100000
Training Epoch: 2 [9216/9756]	Loss: 0.6737	LR: 0.100000
Training Epoch: 2 [9472/9756]	Loss: 0.6671	LR: 0.100000
Training Epoch: 2 [9728/9756]	Loss: 0.6678	LR: 0.100000
Training Epoch: 2 [9756/9756]	Loss: 0.6859	LR: 0.100000
Epoch 2 - Average Train Loss: 0.7159, Train Accuracy: 0.5263
Epoch 2 training time consumed: 141.09s
Evaluating Network.....
Test set: Epoch: 2, Average loss: 0.0031, Accuracy: 0.5419, Time consumed:8.02s
Training Epoch: 3 [256/9756]	Loss: 0.7183	LR: 0.100000
Training Epoch: 3 [512/9756]	Loss: 0.6951	LR: 0.100000
Training Epoch: 3 [768/9756]	Loss: 0.7028	LR: 0.100000
Training Epoch: 3 [1024/9756]	Loss: 0.7061	LR: 0.100000
Training Epoch: 3 [1280/9756]	Loss: 0.6893	LR: 0.100000
Training Epoch: 3 [1536/9756]	Loss: 0.6695	LR: 0.100000
Training Epoch: 3 [1792/9756]	Loss: 0.7004	LR: 0.100000
Training Epoch: 3 [2048/9756]	Loss: 0.6764	LR: 0.100000
Training Epoch: 3 [2304/9756]	Loss: 0.7155	LR: 0.100000
Training Epoch: 3 [2560/9756]	Loss: 0.6867	LR: 0.100000
Training Epoch: 3 [2816/9756]	Loss: 0.7019	LR: 0.100000
Training Epoch: 3 [3072/9756]	Loss: 0.6853	LR: 0.100000
Training Epoch: 3 [3328/9756]	Loss: 0.6957	LR: 0.100000
Training Epoch: 3 [3584/9756]	Loss: 0.6875	LR: 0.100000
Training Epoch: 3 [3840/9756]	Loss: 0.6871	LR: 0.100000
Training Epoch: 3 [4096/9756]	Loss: 0.6721	LR: 0.100000
Training Epoch: 3 [4352/9756]	Loss: 0.6736	LR: 0.100000
Training Epoch: 3 [4608/9756]	Loss: 0.6666	LR: 0.100000
Training Epoch: 3 [4864/9756]	Loss: 0.6628	LR: 0.100000
Training Epoch: 3 [5120/9756]	Loss: 0.6339	LR: 0.100000
Training Epoch: 3 [5376/9756]	Loss: 0.6816	LR: 0.100000
Training Epoch: 3 [5632/9756]	Loss: 0.6807	LR: 0.100000
Training Epoch: 3 [5888/9756]	Loss: 0.7030	LR: 0.100000
Training Epoch: 3 [6144/9756]	Loss: 0.6775	LR: 0.100000
Training Epoch: 3 [6400/9756]	Loss: 0.6735	LR: 0.100000
Training Epoch: 3 [6656/9756]	Loss: 0.6519	LR: 0.100000
Training Epoch: 3 [6912/9756]	Loss: 0.6593	LR: 0.100000
Training Epoch: 3 [7168/9756]	Loss: 0.6956	LR: 0.100000
Training Epoch: 3 [7424/9756]	Loss: 0.7010	LR: 0.100000
Training Epoch: 3 [7680/9756]	Loss: 0.6879	LR: 0.100000
Training Epoch: 3 [7936/9756]	Loss: 0.6875	LR: 0.100000
Training Epoch: 3 [8192/9756]	Loss: 0.6572	LR: 0.100000
Training Epoch: 3 [8448/9756]	Loss: 0.6943	LR: 0.100000
Training Epoch: 3 [8704/9756]	Loss: 0.6746	LR: 0.100000
Training Epoch: 3 [8960/9756]	Loss: 0.6610	LR: 0.100000
Training Epoch: 3 [9216/9756]	Loss: 0.6656	LR: 0.100000
Training Epoch: 3 [9472/9756]	Loss: 0.6981	LR: 0.100000
Training Epoch: 3 [9728/9756]	Loss: 0.7028	LR: 0.100000
Training Epoch: 3 [9756/9756]	Loss: 0.6643	LR: 0.100000
Epoch 3 - Average Train Loss: 0.6836, Train Accuracy: 0.5696
Epoch 3 training time consumed: 140.75s
Evaluating Network.....
Test set: Epoch: 3, Average loss: 0.0031, Accuracy: 0.5554, Time consumed:7.76s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-3-best.pth
Training Epoch: 4 [256/9756]	Loss: 0.6804	LR: 0.100000
Training Epoch: 4 [512/9756]	Loss: 0.6714	LR: 0.100000
Training Epoch: 4 [768/9756]	Loss: 0.6840	LR: 0.100000
Training Epoch: 4 [1024/9756]	Loss: 0.6893	LR: 0.100000
Training Epoch: 4 [1280/9756]	Loss: 0.7021	LR: 0.100000
Training Epoch: 4 [1536/9756]	Loss: 0.6682	LR: 0.100000
Training Epoch: 4 [1792/9756]	Loss: 0.6759	LR: 0.100000
Training Epoch: 4 [2048/9756]	Loss: 0.6938	LR: 0.100000
Training Epoch: 4 [2304/9756]	Loss: 0.6770	LR: 0.100000
Training Epoch: 4 [2560/9756]	Loss: 0.6729	LR: 0.100000
Training Epoch: 4 [2816/9756]	Loss: 0.6740	LR: 0.100000
Training Epoch: 4 [3072/9756]	Loss: 0.6594	LR: 0.100000
Training Epoch: 4 [3328/9756]	Loss: 0.6573	LR: 0.100000
Training Epoch: 4 [3584/9756]	Loss: 0.6805	LR: 0.100000
Training Epoch: 4 [3840/9756]	Loss: 0.6559	LR: 0.100000
Training Epoch: 4 [4096/9756]	Loss: 0.6780	LR: 0.100000
Training Epoch: 4 [4352/9756]	Loss: 0.6527	LR: 0.100000
Training Epoch: 4 [4608/9756]	Loss: 0.6873	LR: 0.100000
Training Epoch: 4 [4864/9756]	Loss: 0.6485	LR: 0.100000
Training Epoch: 4 [5120/9756]	Loss: 0.6492	LR: 0.100000
Training Epoch: 4 [5376/9756]	Loss: 0.6906	LR: 0.100000
Training Epoch: 4 [5632/9756]	Loss: 0.6708	LR: 0.100000
Training Epoch: 4 [5888/9756]	Loss: 0.6722	LR: 0.100000
Training Epoch: 4 [6144/9756]	Loss: 0.6559	LR: 0.100000
Training Epoch: 4 [6400/9756]	Loss: 0.6604	LR: 0.100000
Training Epoch: 4 [6656/9756]	Loss: 0.6506	LR: 0.100000
Training Epoch: 4 [6912/9756]	Loss: 0.6874	LR: 0.100000
Training Epoch: 4 [7168/9756]	Loss: 0.6655	LR: 0.100000
Training Epoch: 4 [7424/9756]	Loss: 0.6641	LR: 0.100000
Training Epoch: 4 [7680/9756]	Loss: 0.6549	LR: 0.100000
Training Epoch: 4 [7936/9756]	Loss: 0.6557	LR: 0.100000
Training Epoch: 4 [8192/9756]	Loss: 0.6613	LR: 0.100000
Training Epoch: 4 [8448/9756]	Loss: 0.6920	LR: 0.100000
Training Epoch: 4 [8704/9756]	Loss: 0.6413	LR: 0.100000
Training Epoch: 4 [8960/9756]	Loss: 0.6599	LR: 0.100000
Training Epoch: 4 [9216/9756]	Loss: 0.6862	LR: 0.100000
Training Epoch: 4 [9472/9756]	Loss: 0.6783	LR: 0.100000
Training Epoch: 4 [9728/9756]	Loss: 0.6861	LR: 0.100000
Training Epoch: 4 [9756/9756]	Loss: 0.7384	LR: 0.100000
Epoch 4 - Average Train Loss: 0.6710, Train Accuracy: 0.5951
Epoch 4 training time consumed: 141.04s
Evaluating Network.....
Test set: Epoch: 4, Average loss: 0.0032, Accuracy: 0.5477, Time consumed:7.89s
Training Epoch: 5 [256/9756]	Loss: 0.6505	LR: 0.100000
Training Epoch: 5 [512/9756]	Loss: 0.6860	LR: 0.100000
Training Epoch: 5 [768/9756]	Loss: 0.6714	LR: 0.100000
Training Epoch: 5 [1024/9756]	Loss: 0.6789	LR: 0.100000
Training Epoch: 5 [1280/9756]	Loss: 0.6809	LR: 0.100000
Training Epoch: 5 [1536/9756]	Loss: 0.6738	LR: 0.100000
Training Epoch: 5 [1792/9756]	Loss: 0.6666	LR: 0.100000
Training Epoch: 5 [2048/9756]	Loss: 0.6669	LR: 0.100000
Training Epoch: 5 [2304/9756]	Loss: 0.6613	LR: 0.100000
Training Epoch: 5 [2560/9756]	Loss: 0.6575	LR: 0.100000
Training Epoch: 5 [2816/9756]	Loss: 0.6647	LR: 0.100000
Training Epoch: 5 [3072/9756]	Loss: 0.6599	LR: 0.100000
Training Epoch: 5 [3328/9756]	Loss: 0.6741	LR: 0.100000
Training Epoch: 5 [3584/9756]	Loss: 0.6692	LR: 0.100000
Training Epoch: 5 [3840/9756]	Loss: 0.6548	LR: 0.100000
Training Epoch: 5 [4096/9756]	Loss: 0.6729	LR: 0.100000
Training Epoch: 5 [4352/9756]	Loss: 0.6677	LR: 0.100000
Training Epoch: 5 [4608/9756]	Loss: 0.6830	LR: 0.100000
Training Epoch: 5 [4864/9756]	Loss: 0.6623	LR: 0.100000
Training Epoch: 5 [5120/9756]	Loss: 0.6846	LR: 0.100000
Training Epoch: 5 [5376/9756]	Loss: 0.6709	LR: 0.100000
Training Epoch: 5 [5632/9756]	Loss: 0.6628	LR: 0.100000
Training Epoch: 5 [5888/9756]	Loss: 0.6686	LR: 0.100000
Training Epoch: 5 [6144/9756]	Loss: 0.6604	LR: 0.100000
Training Epoch: 5 [6400/9756]	Loss: 0.6674	LR: 0.100000
Training Epoch: 5 [6656/9756]	Loss: 0.6846	LR: 0.100000
Training Epoch: 5 [6912/9756]	Loss: 0.7073	LR: 0.100000
Training Epoch: 5 [7168/9756]	Loss: 0.6649	LR: 0.100000
Training Epoch: 5 [7424/9756]	Loss: 0.6521	LR: 0.100000
Training Epoch: 5 [7680/9756]	Loss: 0.6901	LR: 0.100000
Training Epoch: 5 [7936/9756]	Loss: 0.6608	LR: 0.100000
Training Epoch: 5 [8192/9756]	Loss: 0.6392	LR: 0.100000
Training Epoch: 5 [8448/9756]	Loss: 0.6508	LR: 0.100000
Training Epoch: 5 [8704/9756]	Loss: 0.6732	LR: 0.100000
Training Epoch: 5 [8960/9756]	Loss: 0.6943	LR: 0.100000
Training Epoch: 5 [9216/9756]	Loss: 0.6780	LR: 0.100000
Training Epoch: 5 [9472/9756]	Loss: 0.6722	LR: 0.100000
Training Epoch: 5 [9728/9756]	Loss: 0.6606	LR: 0.100000
Training Epoch: 5 [9756/9756]	Loss: 0.6579	LR: 0.100000
Epoch 5 - Average Train Loss: 0.6696, Train Accuracy: 0.5967
Epoch 5 training time consumed: 140.47s
Evaluating Network.....
Test set: Epoch: 5, Average loss: 0.0035, Accuracy: 0.4910, Time consumed:8.11s
Training Epoch: 6 [256/9756]	Loss: 0.6697	LR: 0.100000
Training Epoch: 6 [512/9756]	Loss: 0.6536	LR: 0.100000
Training Epoch: 6 [768/9756]	Loss: 0.6513	LR: 0.100000
Training Epoch: 6 [1024/9756]	Loss: 0.6659	LR: 0.100000
Training Epoch: 6 [1280/9756]	Loss: 0.6775	LR: 0.100000
Training Epoch: 6 [1536/9756]	Loss: 0.6692	LR: 0.100000
Training Epoch: 6 [1792/9756]	Loss: 0.6763	LR: 0.100000
Training Epoch: 6 [2048/9756]	Loss: 0.6676	LR: 0.100000
Training Epoch: 6 [2304/9756]	Loss: 0.6734	LR: 0.100000
Training Epoch: 6 [2560/9756]	Loss: 0.7033	LR: 0.100000
Training Epoch: 6 [2816/9756]	Loss: 0.6735	LR: 0.100000
Training Epoch: 6 [3072/9756]	Loss: 0.6630	LR: 0.100000
Training Epoch: 6 [3328/9756]	Loss: 0.6693	LR: 0.100000
Training Epoch: 6 [3584/9756]	Loss: 0.6397	LR: 0.100000
Training Epoch: 6 [3840/9756]	Loss: 0.6595	LR: 0.100000
Training Epoch: 6 [4096/9756]	Loss: 0.6678	LR: 0.100000
Training Epoch: 6 [4352/9756]	Loss: 0.6471	LR: 0.100000
Training Epoch: 6 [4608/9756]	Loss: 0.6830	LR: 0.100000
Training Epoch: 6 [4864/9756]	Loss: 0.6532	LR: 0.100000
Training Epoch: 6 [5120/9756]	Loss: 0.6693	LR: 0.100000
Training Epoch: 6 [5376/9756]	Loss: 0.6357	LR: 0.100000
Training Epoch: 6 [5632/9756]	Loss: 0.6771	LR: 0.100000
Training Epoch: 6 [5888/9756]	Loss: 0.6675	LR: 0.100000
Training Epoch: 6 [6144/9756]	Loss: 0.6381	LR: 0.100000
Training Epoch: 6 [6400/9756]	Loss: 0.6676	LR: 0.100000
Training Epoch: 6 [6656/9756]	Loss: 0.6761	LR: 0.100000
Training Epoch: 6 [6912/9756]	Loss: 0.6624	LR: 0.100000
Training Epoch: 6 [7168/9756]	Loss: 0.6462	LR: 0.100000
Training Epoch: 6 [7424/9756]	Loss: 0.6350	LR: 0.100000
Training Epoch: 6 [7680/9756]	Loss: 0.6650	LR: 0.100000
Training Epoch: 6 [7936/9756]	Loss: 0.6399	LR: 0.100000
Training Epoch: 6 [8192/9756]	Loss: 0.6585	LR: 0.100000
Training Epoch: 6 [8448/9756]	Loss: 0.6542	LR: 0.100000
Training Epoch: 6 [8704/9756]	Loss: 0.6728	LR: 0.100000
Training Epoch: 6 [8960/9756]	Loss: 0.6787	LR: 0.100000
Training Epoch: 6 [9216/9756]	Loss: 0.6647	LR: 0.100000
Training Epoch: 6 [9472/9756]	Loss: 0.6691	LR: 0.100000
Training Epoch: 6 [9728/9756]	Loss: 0.6615	LR: 0.100000
Training Epoch: 6 [9756/9756]	Loss: 0.6018	LR: 0.100000
Epoch 6 - Average Train Loss: 0.6631, Train Accuracy: 0.6042
Epoch 6 training time consumed: 140.76s
Evaluating Network.....
Test set: Epoch: 6, Average loss: 0.0033, Accuracy: 0.5211, Time consumed:8.01s
Training Epoch: 7 [256/9756]	Loss: 0.6435	LR: 0.100000
Training Epoch: 7 [512/9756]	Loss: 0.6513	LR: 0.100000
Training Epoch: 7 [768/9756]	Loss: 0.6522	LR: 0.100000
Training Epoch: 7 [1024/9756]	Loss: 0.6423	LR: 0.100000
Training Epoch: 7 [1280/9756]	Loss: 0.6558	LR: 0.100000
Training Epoch: 7 [1536/9756]	Loss: 0.6795	LR: 0.100000
Training Epoch: 7 [1792/9756]	Loss: 0.6732	LR: 0.100000
Training Epoch: 7 [2048/9756]	Loss: 0.6708	LR: 0.100000
Training Epoch: 7 [2304/9756]	Loss: 0.6773	LR: 0.100000
Training Epoch: 7 [2560/9756]	Loss: 0.6675	LR: 0.100000
Training Epoch: 7 [2816/9756]	Loss: 0.7119	LR: 0.100000
Training Epoch: 7 [3072/9756]	Loss: 0.6723	LR: 0.100000
Training Epoch: 7 [3328/9756]	Loss: 0.6886	LR: 0.100000
Training Epoch: 7 [3584/9756]	Loss: 0.6623	LR: 0.100000
Training Epoch: 7 [3840/9756]	Loss: 0.6705	LR: 0.100000
Training Epoch: 7 [4096/9756]	Loss: 0.6463	LR: 0.100000
Training Epoch: 7 [4352/9756]	Loss: 0.6629	LR: 0.100000
Training Epoch: 7 [4608/9756]	Loss: 0.5925	LR: 0.100000
Training Epoch: 7 [4864/9756]	Loss: 0.6142	LR: 0.100000
Training Epoch: 7 [5120/9756]	Loss: 0.6596	LR: 0.100000
Training Epoch: 7 [5376/9756]	Loss: 0.6248	LR: 0.100000
Training Epoch: 7 [5632/9756]	Loss: 0.6481	LR: 0.100000
Training Epoch: 7 [5888/9756]	Loss: 0.6769	LR: 0.100000
Training Epoch: 7 [6144/9756]	Loss: 0.6240	LR: 0.100000
Training Epoch: 7 [6400/9756]	Loss: 0.6185	LR: 0.100000
Training Epoch: 7 [6656/9756]	Loss: 0.6752	LR: 0.100000
Training Epoch: 7 [6912/9756]	Loss: 0.6510	LR: 0.100000
Training Epoch: 7 [7168/9756]	Loss: 0.6007	LR: 0.100000
Training Epoch: 7 [7424/9756]	Loss: 0.6521	LR: 0.100000
Training Epoch: 7 [7680/9756]	Loss: 0.6148	LR: 0.100000
Training Epoch: 7 [7936/9756]	Loss: 0.5976	LR: 0.100000
Training Epoch: 7 [8192/9756]	Loss: 0.6484	LR: 0.100000
Training Epoch: 7 [8448/9756]	Loss: 0.6175	LR: 0.100000
Training Epoch: 7 [8704/9756]	Loss: 0.5971	LR: 0.100000
Training Epoch: 7 [8960/9756]	Loss: 0.6380	LR: 0.100000
Training Epoch: 7 [9216/9756]	Loss: 0.5724	LR: 0.100000
Training Epoch: 7 [9472/9756]	Loss: 0.6179	LR: 0.100000
Training Epoch: 7 [9728/9756]	Loss: 0.6418	LR: 0.100000
Training Epoch: 7 [9756/9756]	Loss: 0.6868	LR: 0.100000
Epoch 7 - Average Train Loss: 0.6452, Train Accuracy: 0.6274
Epoch 7 training time consumed: 140.57s
Evaluating Network.....
Test set: Epoch: 7, Average loss: 0.0034, Accuracy: 0.5564, Time consumed:7.86s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-7-best.pth
Training Epoch: 8 [256/9756]	Loss: 0.6300	LR: 0.100000
Training Epoch: 8 [512/9756]	Loss: 0.6504	LR: 0.100000
Training Epoch: 8 [768/9756]	Loss: 0.6703	LR: 0.100000
Training Epoch: 8 [1024/9756]	Loss: 0.6504	LR: 0.100000
Training Epoch: 8 [1280/9756]	Loss: 0.6482	LR: 0.100000
Training Epoch: 8 [1536/9756]	Loss: 0.6760	LR: 0.100000
Training Epoch: 8 [1792/9756]	Loss: 0.6568	LR: 0.100000
Training Epoch: 8 [2048/9756]	Loss: 0.6549	LR: 0.100000
Training Epoch: 8 [2304/9756]	Loss: 0.6346	LR: 0.100000
Training Epoch: 8 [2560/9756]	Loss: 0.6067	LR: 0.100000
Training Epoch: 8 [2816/9756]	Loss: 0.6373	LR: 0.100000
Training Epoch: 8 [3072/9756]	Loss: 0.6351	LR: 0.100000
Training Epoch: 8 [3328/9756]	Loss: 0.6416	LR: 0.100000
Training Epoch: 8 [3584/9756]	Loss: 0.6202	LR: 0.100000
Training Epoch: 8 [3840/9756]	Loss: 0.5896	LR: 0.100000
Training Epoch: 8 [4096/9756]	Loss: 0.5966	LR: 0.100000
Training Epoch: 8 [4352/9756]	Loss: 0.6468	LR: 0.100000
Training Epoch: 8 [4608/9756]	Loss: 0.6345	LR: 0.100000
Training Epoch: 8 [4864/9756]	Loss: 0.5995	LR: 0.100000
Training Epoch: 8 [5120/9756]	Loss: 0.6264	LR: 0.100000
Training Epoch: 8 [5376/9756]	Loss: 0.6121	LR: 0.100000
Training Epoch: 8 [5632/9756]	Loss: 0.6277	LR: 0.100000
Training Epoch: 8 [5888/9756]	Loss: 0.6065	LR: 0.100000
Training Epoch: 8 [6144/9756]	Loss: 0.5918	LR: 0.100000
Training Epoch: 8 [6400/9756]	Loss: 0.6144	LR: 0.100000
Training Epoch: 8 [6656/9756]	Loss: 0.6302	LR: 0.100000
Training Epoch: 8 [6912/9756]	Loss: 0.6265	LR: 0.100000
Training Epoch: 8 [7168/9756]	Loss: 0.6439	LR: 0.100000
Training Epoch: 8 [7424/9756]	Loss: 0.6229	LR: 0.100000
Training Epoch: 8 [7680/9756]	Loss: 0.6108	LR: 0.100000
Training Epoch: 8 [7936/9756]	Loss: 0.6145	LR: 0.100000
Training Epoch: 8 [8192/9756]	Loss: 0.5597	LR: 0.100000
Training Epoch: 8 [8448/9756]	Loss: 0.6356	LR: 0.100000
Training Epoch: 8 [8704/9756]	Loss: 0.5999	LR: 0.100000
Training Epoch: 8 [8960/9756]	Loss: 0.6254	LR: 0.100000
Training Epoch: 8 [9216/9756]	Loss: 0.6018	LR: 0.100000
Training Epoch: 8 [9472/9756]	Loss: 0.5725	LR: 0.100000
Training Epoch: 8 [9728/9756]	Loss: 0.5617	LR: 0.100000
Training Epoch: 8 [9756/9756]	Loss: 0.7430	LR: 0.100000
Epoch 8 - Average Train Loss: 0.6231, Train Accuracy: 0.6624
Epoch 8 training time consumed: 140.90s
Evaluating Network.....
Test set: Epoch: 8, Average loss: 0.0028, Accuracy: 0.6426, Time consumed:7.86s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-8-best.pth
Training Epoch: 9 [256/9756]	Loss: 0.6787	LR: 0.100000
Training Epoch: 9 [512/9756]	Loss: 0.6854	LR: 0.100000
Training Epoch: 9 [768/9756]	Loss: 0.6878	LR: 0.100000
Training Epoch: 9 [1024/9756]	Loss: 0.6837	LR: 0.100000
Training Epoch: 9 [1280/9756]	Loss: 0.6655	LR: 0.100000
Training Epoch: 9 [1536/9756]	Loss: 0.6663	LR: 0.100000
Training Epoch: 9 [1792/9756]	Loss: 0.6769	LR: 0.100000
Training Epoch: 9 [2048/9756]	Loss: 0.6440	LR: 0.100000
Training Epoch: 9 [2304/9756]	Loss: 0.6622	LR: 0.100000
Training Epoch: 9 [2560/9756]	Loss: 0.6465	LR: 0.100000
Training Epoch: 9 [2816/9756]	Loss: 0.6591	LR: 0.100000
Training Epoch: 9 [3072/9756]	Loss: 0.6553	LR: 0.100000
Training Epoch: 9 [3328/9756]	Loss: 0.6552	LR: 0.100000
Training Epoch: 9 [3584/9756]	Loss: 0.6592	LR: 0.100000
Training Epoch: 9 [3840/9756]	Loss: 0.6594	LR: 0.100000
Training Epoch: 9 [4096/9756]	Loss: 0.6647	LR: 0.100000
Training Epoch: 9 [4352/9756]	Loss: 0.6704	LR: 0.100000
Training Epoch: 9 [4608/9756]	Loss: 0.6373	LR: 0.100000
Training Epoch: 9 [4864/9756]	Loss: 0.6081	LR: 0.100000
Training Epoch: 9 [5120/9756]	Loss: 0.6192	LR: 0.100000
Training Epoch: 9 [5376/9756]	Loss: 0.6095	LR: 0.100000
Training Epoch: 9 [5632/9756]	Loss: 0.6287	LR: 0.100000
Training Epoch: 9 [5888/9756]	Loss: 0.5698	LR: 0.100000
Training Epoch: 9 [6144/9756]	Loss: 0.5612	LR: 0.100000
Training Epoch: 9 [6400/9756]	Loss: 0.5791	LR: 0.100000
Training Epoch: 9 [6656/9756]	Loss: 0.6498	LR: 0.100000
Training Epoch: 9 [6912/9756]	Loss: 0.5849	LR: 0.100000
Training Epoch: 9 [7168/9756]	Loss: 0.5781	LR: 0.100000
Training Epoch: 9 [7424/9756]	Loss: 0.5771	LR: 0.100000
Training Epoch: 9 [7680/9756]	Loss: 0.5839	LR: 0.100000
Training Epoch: 9 [7936/9756]	Loss: 0.6058	LR: 0.100000
Training Epoch: 9 [8192/9756]	Loss: 0.5594	LR: 0.100000
Training Epoch: 9 [8448/9756]	Loss: 0.6008	LR: 0.100000
Training Epoch: 9 [8704/9756]	Loss: 0.5741	LR: 0.100000
Training Epoch: 9 [8960/9756]	Loss: 0.6143	LR: 0.100000
Training Epoch: 9 [9216/9756]	Loss: 0.5478	LR: 0.100000
Training Epoch: 9 [9472/9756]	Loss: 0.5253	LR: 0.100000
Training Epoch: 9 [9728/9756]	Loss: 0.5536	LR: 0.100000
Training Epoch: 9 [9756/9756]	Loss: 0.5923	LR: 0.100000
Epoch 9 - Average Train Loss: 0.6233, Train Accuracy: 0.6497
Epoch 9 training time consumed: 140.72s
Evaluating Network.....
Test set: Epoch: 9, Average loss: 0.0030, Accuracy: 0.6673, Time consumed:8.01s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-9-best.pth
Training Epoch: 10 [256/9756]	Loss: 0.5976	LR: 0.020000
Training Epoch: 10 [512/9756]	Loss: 0.5950	LR: 0.020000
Training Epoch: 10 [768/9756]	Loss: 0.5897	LR: 0.020000
Training Epoch: 10 [1024/9756]	Loss: 0.5235	LR: 0.020000
Training Epoch: 10 [1280/9756]	Loss: 0.6098	LR: 0.020000
Training Epoch: 10 [1536/9756]	Loss: 0.5806	LR: 0.020000
Training Epoch: 10 [1792/9756]	Loss: 0.5364	LR: 0.020000
Training Epoch: 10 [2048/9756]	Loss: 0.5585	LR: 0.020000
Training Epoch: 10 [2304/9756]	Loss: 0.5457	LR: 0.020000
Training Epoch: 10 [2560/9756]	Loss: 0.5658	LR: 0.020000
Training Epoch: 10 [2816/9756]	Loss: 0.5783	LR: 0.020000
Training Epoch: 10 [3072/9756]	Loss: 0.5676	LR: 0.020000
Training Epoch: 10 [3328/9756]	Loss: 0.5837	LR: 0.020000
Training Epoch: 10 [3584/9756]	Loss: 0.5451	LR: 0.020000
Training Epoch: 10 [3840/9756]	Loss: 0.5720	LR: 0.020000
Training Epoch: 10 [4096/9756]	Loss: 0.5810	LR: 0.020000
Training Epoch: 10 [4352/9756]	Loss: 0.5967	LR: 0.020000
Training Epoch: 10 [4608/9756]	Loss: 0.5492	LR: 0.020000
Training Epoch: 10 [4864/9756]	Loss: 0.5300	LR: 0.020000
Training Epoch: 10 [5120/9756]	Loss: 0.5219	LR: 0.020000
Training Epoch: 10 [5376/9756]	Loss: 0.5500	LR: 0.020000
Training Epoch: 10 [5632/9756]	Loss: 0.5752	LR: 0.020000
Training Epoch: 10 [5888/9756]	Loss: 0.4896	LR: 0.020000
Training Epoch: 10 [6144/9756]	Loss: 0.5422	LR: 0.020000
Training Epoch: 10 [6400/9756]	Loss: 0.4934	LR: 0.020000
Training Epoch: 10 [6656/9756]	Loss: 0.5365	LR: 0.020000
Training Epoch: 10 [6912/9756]	Loss: 0.5517	LR: 0.020000
Training Epoch: 10 [7168/9756]	Loss: 0.5482	LR: 0.020000
Training Epoch: 10 [7424/9756]	Loss: 0.5944	LR: 0.020000
Training Epoch: 10 [7680/9756]	Loss: 0.5648	LR: 0.020000
Training Epoch: 10 [7936/9756]	Loss: 0.5342	LR: 0.020000
Training Epoch: 10 [8192/9756]	Loss: 0.4796	LR: 0.020000
Training Epoch: 10 [8448/9756]	Loss: 0.5499	LR: 0.020000
Training Epoch: 10 [8704/9756]	Loss: 0.5996	LR: 0.020000
Training Epoch: 10 [8960/9756]	Loss: 0.4837	LR: 0.020000
Training Epoch: 10 [9216/9756]	Loss: 0.5424	LR: 0.020000
Training Epoch: 10 [9472/9756]	Loss: 0.5092	LR: 0.020000
Training Epoch: 10 [9728/9756]	Loss: 0.5381	LR: 0.020000
Training Epoch: 10 [9756/9756]	Loss: 0.5290	LR: 0.020000
Epoch 10 - Average Train Loss: 0.5528, Train Accuracy: 0.7253
Epoch 10 training time consumed: 140.85s
Evaluating Network.....
Test set: Epoch: 10, Average loss: 0.0024, Accuracy: 0.7414, Time consumed:7.92s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-10-best.pth
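At epoch 10 the LR drops from 0.100000 to 0.020000, a ×0.2 step decay at a milestone epoch. A minimal sketch of a milestone-based schedule consistent with that drop; the milestone list and gamma here are inferred from this log alone, not confirmed by the training config:

```python
def multistep_lr(base_lr, epoch, milestones=(10,), gamma=0.2):
    """Step decay: multiply base_lr by gamma once per milestone passed."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed
```

Later milestones, if any, would fall past the point where this log is truncated (mid-epoch 14), so only the first step is observable here.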
Training Epoch: 11 [256/9756]	Loss: 0.5600	LR: 0.020000
Training Epoch: 11 [512/9756]	Loss: 0.5240	LR: 0.020000
Training Epoch: 11 [768/9756]	Loss: 0.5047	LR: 0.020000
Training Epoch: 11 [1024/9756]	Loss: 0.4897	LR: 0.020000
Training Epoch: 11 [1280/9756]	Loss: 0.5086	LR: 0.020000
Training Epoch: 11 [1536/9756]	Loss: 0.5134	LR: 0.020000
Training Epoch: 11 [1792/9756]	Loss: 0.5008	LR: 0.020000
Training Epoch: 11 [2048/9756]	Loss: 0.4664	LR: 0.020000
Training Epoch: 11 [2304/9756]	Loss: 0.5345	LR: 0.020000
Training Epoch: 11 [2560/9756]	Loss: 0.5231	LR: 0.020000
Training Epoch: 11 [2816/9756]	Loss: 0.5027	LR: 0.020000
Training Epoch: 11 [3072/9756]	Loss: 0.4852	LR: 0.020000
Training Epoch: 11 [3328/9756]	Loss: 0.5057	LR: 0.020000
Training Epoch: 11 [3584/9756]	Loss: 0.5298	LR: 0.020000
Training Epoch: 11 [3840/9756]	Loss: 0.4820	LR: 0.020000
Training Epoch: 11 [4096/9756]	Loss: 0.5354	LR: 0.020000
Training Epoch: 11 [4352/9756]	Loss: 0.5087	LR: 0.020000
Training Epoch: 11 [4608/9756]	Loss: 0.4809	LR: 0.020000
Training Epoch: 11 [4864/9756]	Loss: 0.4184	LR: 0.020000
Training Epoch: 11 [5120/9756]	Loss: 0.5127	LR: 0.020000
Training Epoch: 11 [5376/9756]	Loss: 0.5052	LR: 0.020000
Training Epoch: 11 [5632/9756]	Loss: 0.4298	LR: 0.020000
Training Epoch: 11 [5888/9756]	Loss: 0.5044	LR: 0.020000
Training Epoch: 11 [6144/9756]	Loss: 0.4499	LR: 0.020000
Training Epoch: 11 [6400/9756]	Loss: 0.4600	LR: 0.020000
Training Epoch: 11 [6656/9756]	Loss: 0.4731	LR: 0.020000
Training Epoch: 11 [6912/9756]	Loss: 0.5607	LR: 0.020000
Training Epoch: 11 [7168/9756]	Loss: 0.4481	LR: 0.020000
Training Epoch: 11 [7424/9756]	Loss: 0.4973	LR: 0.020000
Training Epoch: 11 [7680/9756]	Loss: 0.4821	LR: 0.020000
Training Epoch: 11 [7936/9756]	Loss: 0.4625	LR: 0.020000
Training Epoch: 11 [8192/9756]	Loss: 0.4833	LR: 0.020000
Training Epoch: 11 [8448/9756]	Loss: 0.4881	LR: 0.020000
Training Epoch: 11 [8704/9756]	Loss: 0.5080	LR: 0.020000
Training Epoch: 11 [8960/9756]	Loss: 0.4657	LR: 0.020000
Training Epoch: 11 [9216/9756]	Loss: 0.5050	LR: 0.020000
Training Epoch: 11 [9472/9756]	Loss: 0.3990	LR: 0.020000
Training Epoch: 11 [9728/9756]	Loss: 0.4393	LR: 0.020000
Training Epoch: 11 [9756/9756]	Loss: 0.5780	LR: 0.020000
Epoch 11 - Average Train Loss: 0.4910, Train Accuracy: 0.7642
Epoch 11 training time consumed: 141.09s
Evaluating Network.....
Test set: Epoch: 11, Average loss: 0.0019, Accuracy: 0.8058, Time consumed:8.01s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-11-best.pth
Training Epoch: 12 [256/9756]	Loss: 0.4711	LR: 0.020000
Training Epoch: 12 [512/9756]	Loss: 0.4512	LR: 0.020000
Training Epoch: 12 [768/9756]	Loss: 0.4547	LR: 0.020000
Training Epoch: 12 [1024/9756]	Loss: 0.4789	LR: 0.020000
Training Epoch: 12 [1280/9756]	Loss: 0.4660	LR: 0.020000
Training Epoch: 12 [1536/9756]	Loss: 0.4447	LR: 0.020000
Training Epoch: 12 [1792/9756]	Loss: 0.4584	LR: 0.020000
Training Epoch: 12 [2048/9756]	Loss: 0.4542	LR: 0.020000
Training Epoch: 12 [2304/9756]	Loss: 0.5459	LR: 0.020000
Training Epoch: 12 [2560/9756]	Loss: 0.4010	LR: 0.020000
Training Epoch: 12 [2816/9756]	Loss: 0.4603	LR: 0.020000
Training Epoch: 12 [3072/9756]	Loss: 0.4547	LR: 0.020000
Training Epoch: 12 [3328/9756]	Loss: 0.4913	LR: 0.020000
Training Epoch: 12 [3584/9756]	Loss: 0.4013	LR: 0.020000
Training Epoch: 12 [3840/9756]	Loss: 0.4564	LR: 0.020000
Training Epoch: 12 [4096/9756]	Loss: 0.4433	LR: 0.020000
Training Epoch: 12 [4352/9756]	Loss: 0.3701	LR: 0.020000
Training Epoch: 12 [4608/9756]	Loss: 0.4565	LR: 0.020000
Training Epoch: 12 [4864/9756]	Loss: 0.4602	LR: 0.020000
Training Epoch: 12 [5120/9756]	Loss: 0.4394	LR: 0.020000
Training Epoch: 12 [5376/9756]	Loss: 0.4742	LR: 0.020000
Training Epoch: 12 [5632/9756]	Loss: 0.4053	LR: 0.020000
Training Epoch: 12 [5888/9756]	Loss: 0.4891	LR: 0.020000
Training Epoch: 12 [6144/9756]	Loss: 0.5245	LR: 0.020000
Training Epoch: 12 [6400/9756]	Loss: 0.4849	LR: 0.020000
Training Epoch: 12 [6656/9756]	Loss: 0.4907	LR: 0.020000
Training Epoch: 12 [6912/9756]	Loss: 0.4508	LR: 0.020000
Training Epoch: 12 [7168/9756]	Loss: 0.4318	LR: 0.020000
Training Epoch: 12 [7424/9756]	Loss: 0.4077	LR: 0.020000
Training Epoch: 12 [7680/9756]	Loss: 0.4067	LR: 0.020000
Training Epoch: 12 [7936/9756]	Loss: 0.3988	LR: 0.020000
Training Epoch: 12 [8192/9756]	Loss: 0.4061	LR: 0.020000
Training Epoch: 12 [8448/9756]	Loss: 0.4341	LR: 0.020000
Training Epoch: 12 [8704/9756]	Loss: 0.4736	LR: 0.020000
Training Epoch: 12 [8960/9756]	Loss: 0.4053	LR: 0.020000
Training Epoch: 12 [9216/9756]	Loss: 0.4254	LR: 0.020000
Training Epoch: 12 [9472/9756]	Loss: 0.4534	LR: 0.020000
Training Epoch: 12 [9728/9756]	Loss: 0.3857	LR: 0.020000
Training Epoch: 12 [9756/9756]	Loss: 0.6031	LR: 0.020000
Epoch 12 - Average Train Loss: 0.4480, Train Accuracy: 0.7951
Epoch 12 training time consumed: 140.57s
Evaluating Network.....
Test set: Epoch: 12, Average loss: 0.0022, Accuracy: 0.7613, Time consumed:7.96s
Training Epoch: 13 [256/9756]	Loss: 0.4087	LR: 0.020000
Training Epoch: 13 [512/9756]	Loss: 0.5037	LR: 0.020000
Training Epoch: 13 [768/9756]	Loss: 0.4880	LR: 0.020000
Training Epoch: 13 [1024/9756]	Loss: 0.4597	LR: 0.020000
Training Epoch: 13 [1280/9756]	Loss: 0.4398	LR: 0.020000
Training Epoch: 13 [1536/9756]	Loss: 0.3942	LR: 0.020000
Training Epoch: 13 [1792/9756]	Loss: 0.4206	LR: 0.020000
Training Epoch: 13 [2048/9756]	Loss: 0.3516	LR: 0.020000
Training Epoch: 13 [2304/9756]	Loss: 0.4371	LR: 0.020000
Training Epoch: 13 [2560/9756]	Loss: 0.4085	LR: 0.020000
Training Epoch: 13 [2816/9756]	Loss: 0.4879	LR: 0.020000
Training Epoch: 13 [3072/9756]	Loss: 0.4752	LR: 0.020000
Training Epoch: 13 [3328/9756]	Loss: 0.4195	LR: 0.020000
Training Epoch: 13 [3584/9756]	Loss: 0.3735	LR: 0.020000
Training Epoch: 13 [3840/9756]	Loss: 0.4398	LR: 0.020000
Training Epoch: 13 [4096/9756]	Loss: 0.4042	LR: 0.020000
Training Epoch: 13 [4352/9756]	Loss: 0.4423	LR: 0.020000
Training Epoch: 13 [4608/9756]	Loss: 0.3841	LR: 0.020000
Training Epoch: 13 [4864/9756]	Loss: 0.4268	LR: 0.020000
Training Epoch: 13 [5120/9756]	Loss: 0.4239	LR: 0.020000
Training Epoch: 13 [5376/9756]	Loss: 0.3901	LR: 0.020000
Training Epoch: 13 [5632/9756]	Loss: 0.4073	LR: 0.020000
Training Epoch: 13 [5888/9756]	Loss: 0.3934	LR: 0.020000
Training Epoch: 13 [6144/9756]	Loss: 0.4170	LR: 0.020000
Training Epoch: 13 [6400/9756]	Loss: 0.4333	LR: 0.020000
Training Epoch: 13 [6656/9756]	Loss: 0.3993	LR: 0.020000
Training Epoch: 13 [6912/9756]	Loss: 0.3320	LR: 0.020000
Training Epoch: 13 [7168/9756]	Loss: 0.4137	LR: 0.020000
Training Epoch: 13 [7424/9756]	Loss: 0.3597	LR: 0.020000
Training Epoch: 13 [7680/9756]	Loss: 0.3210	LR: 0.020000
Training Epoch: 13 [7936/9756]	Loss: 0.3895	LR: 0.020000
Training Epoch: 13 [8192/9756]	Loss: 0.3265	LR: 0.020000
Training Epoch: 13 [8448/9756]	Loss: 0.4421	LR: 0.020000
Training Epoch: 13 [8704/9756]	Loss: 0.3650	LR: 0.020000
Training Epoch: 13 [8960/9756]	Loss: 0.3897	LR: 0.020000
Training Epoch: 13 [9216/9756]	Loss: 0.3934	LR: 0.020000
Training Epoch: 13 [9472/9756]	Loss: 0.4122	LR: 0.020000
Training Epoch: 13 [9728/9756]	Loss: 0.4973	LR: 0.020000
Training Epoch: 13 [9756/9756]	Loss: 0.4208	LR: 0.020000
Epoch 13 - Average Train Loss: 0.4124, Train Accuracy: 0.8138
Epoch 13 training time consumed: 141.08s
Evaluating Network.....
Test set: Epoch: 13, Average loss: 0.0018, Accuracy: 0.8160, Time consumed: 8.03s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-13-best.pth
Training Epoch: 14 [256/9756]	Loss: 0.3868	LR: 0.020000
Training Epoch: 14 [512/9756]	Loss: 0.4860	LR: 0.020000
Training Epoch: 14 [768/9756]	Loss: 0.4147	LR: 0.020000
Training Epoch: 14 [1024/9756]	Loss: 0.4433	LR: 0.020000
Training Epoch: 14 [1280/9756]	Loss: 0.4633	LR: 0.020000
Training Epoch: 14 [1536/9756]	Loss: 0.3574	LR: 0.020000
Training Epoch: 14 [1792/9756]	Loss: 0.4521	LR: 0.020000
Training Epoch: 14 [2048/9756]	Loss: 0.3373	LR: 0.020000
Training Epoch: 14 [2304/9756]	Loss: 0.4270	LR: 0.020000
Training Epoch: 14 [2560/9756]	Loss: 0.3228	LR: 0.020000
Training Epoch: 14 [2816/9756]	Loss: 0.4296	LR: 0.020000
Training Epoch: 14 [3072/9756]	Loss: 0.4826	LR: 0.020000
Training Epoch: 14 [3328/9756]	Loss: 0.4463	LR: 0.020000
Training Epoch: 14 [3584/9756]	Loss: 0.3865	LR: 0.020000
Training Epoch: 14 [3840/9756]	Loss: 0.4418	LR: 0.020000
Training Epoch: 14 [4096/9756]	Loss: 0.3986	LR: 0.020000
Training Epoch: 14 [4352/9756]	Loss: 0.4287	LR: 0.020000
Training Epoch: 14 [4608/9756]	Loss: 0.3656	LR: 0.020000
Training Epoch: 14 [4864/9756]	Loss: 0.4026	LR: 0.020000
Training Epoch: 14 [5120/9756]	Loss: 0.3670	LR: 0.020000
Training Epoch: 14 [5376/9756]	Loss: 0.4165	LR: 0.020000
Training Epoch: 14 [5632/9756]	Loss: 0.3827	LR: 0.020000
Training Epoch: 14 [5888/9756]	Loss: 0.3231	LR: 0.020000
Training Epoch: 14 [6144/9756]	Loss: 0.3506	LR: 0.020000
Training Epoch: 14 [6400/9756]	Loss: 0.4059	LR: 0.020000
Training Epoch: 14 [6656/9756]	Loss: 0.3800	LR: 0.020000
Training Epoch: 14 [6912/9756]	Loss: 0.3275	LR: 0.020000
Training Epoch: 14 [7168/9756]	Loss: 0.3834	LR: 0.020000
Training Epoch: 14 [7424/9756]	Loss: 0.3710	LR: 0.020000
Training Epoch: 14 [7680/9756]	Loss: 0.3518	LR: 0.020000
Training Epoch: 14 [7936/9756]	Loss: 0.3501	LR: 0.020000
Training Epoch: 14 [8192/9756]	Loss: 0.3358	LR: 0.020000
Training Epoch: 14 [8448/9756]	Loss: 0.3566	LR: 0.020000
Training Epoch: 14 [8704/9756]	Loss: 0.3823	LR: 0.020000
Training Epoch: 14 [8960/9756]	Loss: 0.4088	LR: 0.020000
Training Epoch: 14 [9216/9756]	Loss: 0.3255	LR: 0.020000
Training Epoch: 14 [9472/9756]	Loss: 0.3086	LR: 0.020000
Training Epoch: 14 [9728/9756]	Loss: 0.4412	LR: 0.020000
Training Epoch: 14 [9756/9756]	Loss: 0.5240	LR: 0.020000
Epoch 14 - Average Train Loss: 0.3909, Train Accuracy: 0.8276
Epoch 14 training time consumed: 140.69s
Evaluating Network.....
Test set: Epoch: 14, Average loss: 0.0019, Accuracy: 0.8169, Time consumed: 8.15s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-14-best.pth
Training Epoch: 15 [256/9756]	Loss: 0.4653	LR: 0.020000
Training Epoch: 15 [512/9756]	Loss: 0.4697	LR: 0.020000
Training Epoch: 15 [768/9756]	Loss: 0.4329	LR: 0.020000
Training Epoch: 15 [1024/9756]	Loss: 0.4271	LR: 0.020000
Training Epoch: 15 [1280/9756]	Loss: 0.3859	LR: 0.020000
Training Epoch: 15 [1536/9756]	Loss: 0.3601	LR: 0.020000
Training Epoch: 15 [1792/9756]	Loss: 0.4256	LR: 0.020000
Training Epoch: 15 [2048/9756]	Loss: 0.3479	LR: 0.020000
Training Epoch: 15 [2304/9756]	Loss: 0.3906	LR: 0.020000
Training Epoch: 15 [2560/9756]	Loss: 0.3758	LR: 0.020000
Training Epoch: 15 [2816/9756]	Loss: 0.3787	LR: 0.020000
Training Epoch: 15 [3072/9756]	Loss: 0.3562	LR: 0.020000
Training Epoch: 15 [3328/9756]	Loss: 0.3830	LR: 0.020000
Training Epoch: 15 [3584/9756]	Loss: 0.3425	LR: 0.020000
Training Epoch: 15 [3840/9756]	Loss: 0.3637	LR: 0.020000
Training Epoch: 15 [4096/9756]	Loss: 0.3764	LR: 0.020000
Training Epoch: 15 [4352/9756]	Loss: 0.3405	LR: 0.020000
Training Epoch: 15 [4608/9756]	Loss: 0.3741	LR: 0.020000
Training Epoch: 15 [4864/9756]	Loss: 0.2710	LR: 0.020000
Training Epoch: 15 [5120/9756]	Loss: 0.3563	LR: 0.020000
Training Epoch: 15 [5376/9756]	Loss: 0.3786	LR: 0.020000
Training Epoch: 15 [5632/9756]	Loss: 0.3386	LR: 0.020000
Training Epoch: 15 [5888/9756]	Loss: 0.3423	LR: 0.020000
Training Epoch: 15 [6144/9756]	Loss: 0.3317	LR: 0.020000
Training Epoch: 15 [6400/9756]	Loss: 0.3436	LR: 0.020000
Training Epoch: 15 [6656/9756]	Loss: 0.3051	LR: 0.020000
Training Epoch: 15 [6912/9756]	Loss: 0.3465	LR: 0.020000
Training Epoch: 15 [7168/9756]	Loss: 0.3673	LR: 0.020000
Training Epoch: 15 [7424/9756]	Loss: 0.2951	LR: 0.020000
Training Epoch: 15 [7680/9756]	Loss: 0.3706	LR: 0.020000
Training Epoch: 15 [7936/9756]	Loss: 0.3128	LR: 0.020000
Training Epoch: 15 [8192/9756]	Loss: 0.3077	LR: 0.020000
Training Epoch: 15 [8448/9756]	Loss: 0.4071	LR: 0.020000
Training Epoch: 15 [8704/9756]	Loss: 0.3306	LR: 0.020000
Training Epoch: 15 [8960/9756]	Loss: 0.3856	LR: 0.020000
Training Epoch: 15 [9216/9756]	Loss: 0.3366	LR: 0.020000
Training Epoch: 15 [9472/9756]	Loss: 0.3301	LR: 0.020000
Training Epoch: 15 [9728/9756]	Loss: 0.3615	LR: 0.020000
Training Epoch: 15 [9756/9756]	Loss: 0.3749	LR: 0.020000
Epoch 15 - Average Train Loss: 0.3636, Train Accuracy: 0.8411
Epoch 15 training time consumed: 140.70s
Evaluating Network.....
Test set: Epoch: 15, Average loss: 0.0049, Accuracy: 0.5666, Time consumed: 7.91s
Training Epoch: 16 [256/9756]	Loss: 0.3527	LR: 0.020000
Training Epoch: 16 [512/9756]	Loss: 0.3243	LR: 0.020000
Training Epoch: 16 [768/9756]	Loss: 0.3668	LR: 0.020000
Training Epoch: 16 [1024/9756]	Loss: 0.3405	LR: 0.020000
Training Epoch: 16 [1280/9756]	Loss: 0.3538	LR: 0.020000
Training Epoch: 16 [1536/9756]	Loss: 0.3018	LR: 0.020000
Training Epoch: 16 [1792/9756]	Loss: 0.3058	LR: 0.020000
Training Epoch: 16 [2048/9756]	Loss: 0.2789	LR: 0.020000
Training Epoch: 16 [2304/9756]	Loss: 0.3493	LR: 0.020000
Training Epoch: 16 [2560/9756]	Loss: 0.3471	LR: 0.020000
Training Epoch: 16 [2816/9756]	Loss: 0.3364	LR: 0.020000
Training Epoch: 16 [3072/9756]	Loss: 0.3632	LR: 0.020000
Training Epoch: 16 [3328/9756]	Loss: 0.3726	LR: 0.020000
Training Epoch: 16 [3584/9756]	Loss: 0.3346	LR: 0.020000
Training Epoch: 16 [3840/9756]	Loss: 0.3273	LR: 0.020000
Training Epoch: 16 [4096/9756]	Loss: 0.3741	LR: 0.020000
Training Epoch: 16 [4352/9756]	Loss: 0.2723	LR: 0.020000
Training Epoch: 16 [4608/9756]	Loss: 0.3294	LR: 0.020000
Training Epoch: 16 [4864/9756]	Loss: 0.3378	LR: 0.020000
Training Epoch: 16 [5120/9756]	Loss: 0.3215	LR: 0.020000
Training Epoch: 16 [5376/9756]	Loss: 0.3182	LR: 0.020000
Training Epoch: 16 [5632/9756]	Loss: 0.3052	LR: 0.020000
Training Epoch: 16 [5888/9756]	Loss: 0.3090	LR: 0.020000
Training Epoch: 16 [6144/9756]	Loss: 0.3436	LR: 0.020000
Training Epoch: 16 [6400/9756]	Loss: 0.3195	LR: 0.020000
Training Epoch: 16 [6656/9756]	Loss: 0.2668	LR: 0.020000
Training Epoch: 16 [6912/9756]	Loss: 0.3037	LR: 0.020000
Training Epoch: 16 [7168/9756]	Loss: 0.3047	LR: 0.020000
Training Epoch: 16 [7424/9756]	Loss: 0.3104	LR: 0.020000
Training Epoch: 16 [7680/9756]	Loss: 0.3488	LR: 0.020000
Training Epoch: 16 [7936/9756]	Loss: 0.3038	LR: 0.020000
Training Epoch: 16 [8192/9756]	Loss: 0.2308	LR: 0.020000
Training Epoch: 16 [8448/9756]	Loss: 0.3169	LR: 0.020000
Training Epoch: 16 [8704/9756]	Loss: 0.3139	LR: 0.020000
Training Epoch: 16 [8960/9756]	Loss: 0.2375	LR: 0.020000
Training Epoch: 16 [9216/9756]	Loss: 0.2799	LR: 0.020000
Training Epoch: 16 [9472/9756]	Loss: 0.2996	LR: 0.020000
Training Epoch: 16 [9728/9756]	Loss: 0.3052	LR: 0.020000
Training Epoch: 16 [9756/9756]	Loss: 0.2731	LR: 0.020000
Epoch 16 - Average Train Loss: 0.3185, Train Accuracy: 0.8620
Epoch 16 training time consumed: 140.81s
Evaluating Network.....
Test set: Epoch: 16, Average loss: 0.0016, Accuracy: 0.8600, Time consumed: 7.94s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-16-best.pth
Training Epoch: 17 [256/9756]	Loss: 0.2779	LR: 0.020000
Training Epoch: 17 [512/9756]	Loss: 0.3400	LR: 0.020000
Training Epoch: 17 [768/9756]	Loss: 0.2758	LR: 0.020000
Training Epoch: 17 [1024/9756]	Loss: 0.3730	LR: 0.020000
Training Epoch: 17 [1280/9756]	Loss: 0.3656	LR: 0.020000
Training Epoch: 17 [1536/9756]	Loss: 0.3770	LR: 0.020000
Training Epoch: 17 [1792/9756]	Loss: 0.2715	LR: 0.020000
Training Epoch: 17 [2048/9756]	Loss: 0.2939	LR: 0.020000
Training Epoch: 17 [2304/9756]	Loss: 0.3418	LR: 0.020000
Training Epoch: 17 [2560/9756]	Loss: 0.2780	LR: 0.020000
Training Epoch: 17 [2816/9756]	Loss: 0.2750	LR: 0.020000
Training Epoch: 17 [3072/9756]	Loss: 0.3833	LR: 0.020000
Training Epoch: 17 [3328/9756]	Loss: 0.2893	LR: 0.020000
Training Epoch: 17 [3584/9756]	Loss: 0.2971	LR: 0.020000
Training Epoch: 17 [3840/9756]	Loss: 0.3173	LR: 0.020000
Training Epoch: 17 [4096/9756]	Loss: 0.3229	LR: 0.020000
Training Epoch: 17 [4352/9756]	Loss: 0.3155	LR: 0.020000
Training Epoch: 17 [4608/9756]	Loss: 0.2787	LR: 0.020000
Training Epoch: 17 [4864/9756]	Loss: 0.2960	LR: 0.020000
Training Epoch: 17 [5120/9756]	Loss: 0.3181	LR: 0.020000
Training Epoch: 17 [5376/9756]	Loss: 0.2446	LR: 0.020000
Training Epoch: 17 [5632/9756]	Loss: 0.3284	LR: 0.020000
Training Epoch: 17 [5888/9756]	Loss: 0.2502	LR: 0.020000
Training Epoch: 17 [6144/9756]	Loss: 0.3207	LR: 0.020000
Training Epoch: 17 [6400/9756]	Loss: 0.2560	LR: 0.020000
Training Epoch: 17 [6656/9756]	Loss: 0.2822	LR: 0.020000
Training Epoch: 17 [6912/9756]	Loss: 0.3375	LR: 0.020000
Training Epoch: 17 [7168/9756]	Loss: 0.2734	LR: 0.020000
Training Epoch: 17 [7424/9756]	Loss: 0.2612	LR: 0.020000
Training Epoch: 17 [7680/9756]	Loss: 0.2569	LR: 0.020000
Training Epoch: 17 [7936/9756]	Loss: 0.2912	LR: 0.020000
Training Epoch: 17 [8192/9756]	Loss: 0.2600	LR: 0.020000
Training Epoch: 17 [8448/9756]	Loss: 0.2790	LR: 0.020000
Training Epoch: 17 [8704/9756]	Loss: 0.2923	LR: 0.020000
Training Epoch: 17 [8960/9756]	Loss: 0.2998	LR: 0.020000
Training Epoch: 17 [9216/9756]	Loss: 0.2922	LR: 0.020000
Training Epoch: 17 [9472/9756]	Loss: 0.2413	LR: 0.020000
Training Epoch: 17 [9728/9756]	Loss: 0.2701	LR: 0.020000
Training Epoch: 17 [9756/9756]	Loss: 0.3656	LR: 0.020000
Epoch 17 - Average Train Loss: 0.2982, Train Accuracy: 0.8746
Epoch 17 training time consumed: 140.68s
Evaluating Network.....
Test set: Epoch: 17, Average loss: 0.0012, Accuracy: 0.8867, Time consumed: 8.09s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-17-best.pth
Training Epoch: 18 [256/9756]	Loss: 0.2446	LR: 0.020000
Training Epoch: 18 [512/9756]	Loss: 0.2006	LR: 0.020000
Training Epoch: 18 [768/9756]	Loss: 0.2701	LR: 0.020000
Training Epoch: 18 [1024/9756]	Loss: 0.2939	LR: 0.020000
Training Epoch: 18 [1280/9756]	Loss: 0.2767	LR: 0.020000
Training Epoch: 18 [1536/9756]	Loss: 0.2632	LR: 0.020000
Training Epoch: 18 [1792/9756]	Loss: 0.2094	LR: 0.020000
Training Epoch: 18 [2048/9756]	Loss: 0.2626	LR: 0.020000
Training Epoch: 18 [2304/9756]	Loss: 0.2518	LR: 0.020000
Training Epoch: 18 [2560/9756]	Loss: 0.2959	LR: 0.020000
Training Epoch: 18 [2816/9756]	Loss: 0.2433	LR: 0.020000
Training Epoch: 18 [3072/9756]	Loss: 0.2881	LR: 0.020000
Training Epoch: 18 [3328/9756]	Loss: 0.2888	LR: 0.020000
Training Epoch: 18 [3584/9756]	Loss: 0.2373	LR: 0.020000
Training Epoch: 18 [3840/9756]	Loss: 0.2865	LR: 0.020000
Training Epoch: 18 [4096/9756]	Loss: 0.2373	LR: 0.020000
Training Epoch: 18 [4352/9756]	Loss: 0.2503	LR: 0.020000
Training Epoch: 18 [4608/9756]	Loss: 0.3001	LR: 0.020000
Training Epoch: 18 [4864/9756]	Loss: 0.3425	LR: 0.020000
Training Epoch: 18 [5120/9756]	Loss: 0.2591	LR: 0.020000
Training Epoch: 18 [5376/9756]	Loss: 0.2826	LR: 0.020000
Training Epoch: 18 [5632/9756]	Loss: 0.2594	LR: 0.020000
Training Epoch: 18 [5888/9756]	Loss: 0.2538	LR: 0.020000
Training Epoch: 18 [6144/9756]	Loss: 0.2454	LR: 0.020000
Training Epoch: 18 [6400/9756]	Loss: 0.2601	LR: 0.020000
Training Epoch: 18 [6656/9756]	Loss: 0.2447	LR: 0.020000
Training Epoch: 18 [6912/9756]	Loss: 0.2620	LR: 0.020000
Training Epoch: 18 [7168/9756]	Loss: 0.2533	LR: 0.020000
Training Epoch: 18 [7424/9756]	Loss: 0.2844	LR: 0.020000
Training Epoch: 18 [7680/9756]	Loss: 0.3329	LR: 0.020000
Training Epoch: 18 [7936/9756]	Loss: 0.2652	LR: 0.020000
Training Epoch: 18 [8192/9756]	Loss: 0.2881	LR: 0.020000
Training Epoch: 18 [8448/9756]	Loss: 0.2259	LR: 0.020000
Training Epoch: 18 [8704/9756]	Loss: 0.2433	LR: 0.020000
Training Epoch: 18 [8960/9756]	Loss: 0.2538	LR: 0.020000
Training Epoch: 18 [9216/9756]	Loss: 0.2115	LR: 0.020000
Training Epoch: 18 [9472/9756]	Loss: 0.2037	LR: 0.020000
Training Epoch: 18 [9728/9756]	Loss: 0.2009	LR: 0.020000
Training Epoch: 18 [9756/9756]	Loss: 0.6440	LR: 0.020000
Epoch 18 - Average Train Loss: 0.2609, Train Accuracy: 0.8913
Epoch 18 training time consumed: 140.83s
Evaluating Network.....
Test set: Epoch: 18, Average loss: 0.0016, Accuracy: 0.8567, Time consumed: 8.06s
Training Epoch: 19 [256/9756]	Loss: 0.2208	LR: 0.020000
Training Epoch: 19 [512/9756]	Loss: 0.2438	LR: 0.020000
Training Epoch: 19 [768/9756]	Loss: 0.2437	LR: 0.020000
Training Epoch: 19 [1024/9756]	Loss: 0.3003	LR: 0.020000
Training Epoch: 19 [1280/9756]	Loss: 0.2241	LR: 0.020000
Training Epoch: 19 [1536/9756]	Loss: 0.3111	LR: 0.020000
Training Epoch: 19 [1792/9756]	Loss: 0.2102	LR: 0.020000
Training Epoch: 19 [2048/9756]	Loss: 0.2375	LR: 0.020000
Training Epoch: 19 [2304/9756]	Loss: 0.1968	LR: 0.020000
Training Epoch: 19 [2560/9756]	Loss: 0.2563	LR: 0.020000
Training Epoch: 19 [2816/9756]	Loss: 0.3067	LR: 0.020000
Training Epoch: 19 [3072/9756]	Loss: 0.2262	LR: 0.020000
Training Epoch: 19 [3328/9756]	Loss: 0.2350	LR: 0.020000
Training Epoch: 19 [3584/9756]	Loss: 0.1872	LR: 0.020000
Training Epoch: 19 [3840/9756]	Loss: 0.2684	LR: 0.020000
Training Epoch: 19 [4096/9756]	Loss: 0.1974	LR: 0.020000
Training Epoch: 19 [4352/9756]	Loss: 0.1769	LR: 0.020000
Training Epoch: 19 [4608/9756]	Loss: 0.2280	LR: 0.020000
Training Epoch: 19 [4864/9756]	Loss: 0.2449	LR: 0.020000
Training Epoch: 19 [5120/9756]	Loss: 0.1951	LR: 0.020000
Training Epoch: 19 [5376/9756]	Loss: 0.2645	LR: 0.020000
Training Epoch: 19 [5632/9756]	Loss: 0.3454	LR: 0.020000
Training Epoch: 19 [5888/9756]	Loss: 0.2838	LR: 0.020000
Training Epoch: 19 [6144/9756]	Loss: 0.2748	LR: 0.020000
Training Epoch: 19 [6400/9756]	Loss: 0.2562	LR: 0.020000
Training Epoch: 19 [6656/9756]	Loss: 0.2694	LR: 0.020000
Training Epoch: 19 [6912/9756]	Loss: 0.2238	LR: 0.020000
Training Epoch: 19 [7168/9756]	Loss: 0.2542	LR: 0.020000
Training Epoch: 19 [7424/9756]	Loss: 0.2121	LR: 0.020000
Training Epoch: 19 [7680/9756]	Loss: 0.2043	LR: 0.020000
Training Epoch: 19 [7936/9756]	Loss: 0.2485	LR: 0.020000
Training Epoch: 19 [8192/9756]	Loss: 0.2412	LR: 0.020000
Training Epoch: 19 [8448/9756]	Loss: 0.1844	LR: 0.020000
Training Epoch: 19 [8704/9756]	Loss: 0.2681	LR: 0.020000
Training Epoch: 19 [8960/9756]	Loss: 0.2671	LR: 0.020000
Training Epoch: 19 [9216/9756]	Loss: 0.2213	LR: 0.020000
Training Epoch: 19 [9472/9756]	Loss: 0.2427	LR: 0.020000
Training Epoch: 19 [9728/9756]	Loss: 0.2292	LR: 0.020000
Training Epoch: 19 [9756/9756]	Loss: 0.0887	LR: 0.020000
Epoch 19 - Average Train Loss: 0.2417, Train Accuracy: 0.9028
Epoch 19 training time consumed: 141.15s
Evaluating Network.....
Test set: Epoch: 19, Average loss: 0.0027, Accuracy: 0.7676, Time consumed: 7.87s
Training Epoch: 20 [256/9756]	Loss: 0.2068	LR: 0.004000
Training Epoch: 20 [512/9756]	Loss: 0.2467	LR: 0.004000
Training Epoch: 20 [768/9756]	Loss: 0.2716	LR: 0.004000
Training Epoch: 20 [1024/9756]	Loss: 0.2129	LR: 0.004000
Training Epoch: 20 [1280/9756]	Loss: 0.2015	LR: 0.004000
Training Epoch: 20 [1536/9756]	Loss: 0.2118	LR: 0.004000
Training Epoch: 20 [1792/9756]	Loss: 0.2636	LR: 0.004000
Training Epoch: 20 [2048/9756]	Loss: 0.2114	LR: 0.004000
Training Epoch: 20 [2304/9756]	Loss: 0.2272	LR: 0.004000
Training Epoch: 20 [2560/9756]	Loss: 0.2589	LR: 0.004000
Training Epoch: 20 [2816/9756]	Loss: 0.1929	LR: 0.004000
Training Epoch: 20 [3072/9756]	Loss: 0.2309	LR: 0.004000
Training Epoch: 20 [3328/9756]	Loss: 0.1783	LR: 0.004000
Training Epoch: 20 [3584/9756]	Loss: 0.1467	LR: 0.004000
Training Epoch: 20 [3840/9756]	Loss: 0.2150	LR: 0.004000
Training Epoch: 20 [4096/9756]	Loss: 0.2251	LR: 0.004000
Training Epoch: 20 [4352/9756]	Loss: 0.1659	LR: 0.004000
Training Epoch: 20 [4608/9756]	Loss: 0.2217	LR: 0.004000
Training Epoch: 20 [4864/9756]	Loss: 0.2060	LR: 0.004000
Training Epoch: 20 [5120/9756]	Loss: 0.2916	LR: 0.004000
Training Epoch: 20 [5376/9756]	Loss: 0.1918	LR: 0.004000
Training Epoch: 20 [5632/9756]	Loss: 0.2441	LR: 0.004000
Training Epoch: 20 [5888/9756]	Loss: 0.1914	LR: 0.004000
Training Epoch: 20 [6144/9756]	Loss: 0.2125	LR: 0.004000
Training Epoch: 20 [6400/9756]	Loss: 0.1784	LR: 0.004000
Training Epoch: 20 [6656/9756]	Loss: 0.2150	LR: 0.004000
Training Epoch: 20 [6912/9756]	Loss: 0.2191	LR: 0.004000
Training Epoch: 20 [7168/9756]	Loss: 0.1513	LR: 0.004000
Training Epoch: 20 [7424/9756]	Loss: 0.1572	LR: 0.004000
Training Epoch: 20 [7680/9756]	Loss: 0.1830	LR: 0.004000
Training Epoch: 20 [7936/9756]	Loss: 0.1723	LR: 0.004000
Training Epoch: 20 [8192/9756]	Loss: 0.1721	LR: 0.004000
Training Epoch: 20 [8448/9756]	Loss: 0.2063	LR: 0.004000
Training Epoch: 20 [8704/9756]	Loss: 0.2204	LR: 0.004000
Training Epoch: 20 [8960/9756]	Loss: 0.2064	LR: 0.004000
Training Epoch: 20 [9216/9756]	Loss: 0.2033	LR: 0.004000
Training Epoch: 20 [9472/9756]	Loss: 0.1798	LR: 0.004000
Training Epoch: 20 [9728/9756]	Loss: 0.1560	LR: 0.004000
Training Epoch: 20 [9756/9756]	Loss: 0.1797	LR: 0.004000
Epoch 20 - Average Train Loss: 0.2064, Train Accuracy: 0.9159
Epoch 20 training time consumed: 142.23s
Evaluating Network.....
Test set: Epoch: 20, Average loss: 0.0009, Accuracy: 0.9182, Time consumed: 8.28s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-20-best.pth
Training Epoch: 21 [256/9756]	Loss: 0.1928	LR: 0.004000
Training Epoch: 21 [512/9756]	Loss: 0.1647	LR: 0.004000
Training Epoch: 21 [768/9756]	Loss: 0.2435	LR: 0.004000
Training Epoch: 21 [1024/9756]	Loss: 0.2439	LR: 0.004000
Training Epoch: 21 [1280/9756]	Loss: 0.2251	LR: 0.004000
Training Epoch: 21 [1536/9756]	Loss: 0.1892	LR: 0.004000
Training Epoch: 21 [1792/9756]	Loss: 0.2288	LR: 0.004000
Training Epoch: 21 [2048/9756]	Loss: 0.1965	LR: 0.004000
Training Epoch: 21 [2304/9756]	Loss: 0.1810	LR: 0.004000
Training Epoch: 21 [2560/9756]	Loss: 0.1622	LR: 0.004000
Training Epoch: 21 [2816/9756]	Loss: 0.1913	LR: 0.004000
Training Epoch: 21 [3072/9756]	Loss: 0.1860	LR: 0.004000
Training Epoch: 21 [3328/9756]	Loss: 0.2063	LR: 0.004000
Training Epoch: 21 [3584/9756]	Loss: 0.1603	LR: 0.004000
Training Epoch: 21 [3840/9756]	Loss: 0.2131	LR: 0.004000
Training Epoch: 21 [4096/9756]	Loss: 0.1839	LR: 0.004000
Training Epoch: 21 [4352/9756]	Loss: 0.1994	LR: 0.004000
Training Epoch: 21 [4608/9756]	Loss: 0.2174	LR: 0.004000
Training Epoch: 21 [4864/9756]	Loss: 0.1279	LR: 0.004000
Training Epoch: 21 [5120/9756]	Loss: 0.1963	LR: 0.004000
Training Epoch: 21 [5376/9756]	Loss: 0.2506	LR: 0.004000
Training Epoch: 21 [5632/9756]	Loss: 0.1526	LR: 0.004000
Training Epoch: 21 [5888/9756]	Loss: 0.2092	LR: 0.004000
Training Epoch: 21 [6144/9756]	Loss: 0.1314	LR: 0.004000
Training Epoch: 21 [6400/9756]	Loss: 0.1997	LR: 0.004000
Training Epoch: 21 [6656/9756]	Loss: 0.1639	LR: 0.004000
Training Epoch: 21 [6912/9756]	Loss: 0.1867	LR: 0.004000
Training Epoch: 21 [7168/9756]	Loss: 0.1625	LR: 0.004000
Training Epoch: 21 [7424/9756]	Loss: 0.1592	LR: 0.004000
Training Epoch: 21 [7680/9756]	Loss: 0.1409	LR: 0.004000
Training Epoch: 21 [7936/9756]	Loss: 0.1773	LR: 0.004000
Training Epoch: 21 [8192/9756]	Loss: 0.2017	LR: 0.004000
Training Epoch: 21 [8448/9756]	Loss: 0.2053	LR: 0.004000
Training Epoch: 21 [8704/9756]	Loss: 0.2249	LR: 0.004000
Training Epoch: 21 [8960/9756]	Loss: 0.2365	LR: 0.004000
Training Epoch: 21 [9216/9756]	Loss: 0.1873	LR: 0.004000
Training Epoch: 21 [9472/9756]	Loss: 0.1900	LR: 0.004000
Training Epoch: 21 [9728/9756]	Loss: 0.1650	LR: 0.004000
Training Epoch: 21 [9756/9756]	Loss: 0.1073	LR: 0.004000
Epoch 21 - Average Train Loss: 0.1907, Train Accuracy: 0.9208
Epoch 21 training time consumed: 141.45s
Evaluating Network.....
Test set: Epoch: 21, Average loss: 0.0008, Accuracy: 0.9172, Time consumed: 8.15s
Training Epoch: 22 [256/9756]	Loss: 0.1883	LR: 0.004000
Training Epoch: 22 [512/9756]	Loss: 0.1495	LR: 0.004000
Training Epoch: 22 [768/9756]	Loss: 0.2407	LR: 0.004000
Training Epoch: 22 [1024/9756]	Loss: 0.1676	LR: 0.004000
Training Epoch: 22 [1280/9756]	Loss: 0.2231	LR: 0.004000
Training Epoch: 22 [1536/9756]	Loss: 0.2114	LR: 0.004000
Training Epoch: 22 [1792/9756]	Loss: 0.1771	LR: 0.004000
Training Epoch: 22 [2048/9756]	Loss: 0.1498	LR: 0.004000
Training Epoch: 22 [2304/9756]	Loss: 0.1889	LR: 0.004000
Training Epoch: 22 [2560/9756]	Loss: 0.1887	LR: 0.004000
Training Epoch: 22 [2816/9756]	Loss: 0.2079	LR: 0.004000
Training Epoch: 22 [3072/9756]	Loss: 0.1530	LR: 0.004000
Training Epoch: 22 [3328/9756]	Loss: 0.2601	LR: 0.004000
Training Epoch: 22 [3584/9756]	Loss: 0.2001	LR: 0.004000
Training Epoch: 22 [3840/9756]	Loss: 0.1727	LR: 0.004000
Training Epoch: 22 [4096/9756]	Loss: 0.2396	LR: 0.004000
Training Epoch: 22 [4352/9756]	Loss: 0.2342	LR: 0.004000
Training Epoch: 22 [4608/9756]	Loss: 0.1700	LR: 0.004000
Training Epoch: 22 [4864/9756]	Loss: 0.2513	LR: 0.004000
Training Epoch: 22 [5120/9756]	Loss: 0.1713	LR: 0.004000
Training Epoch: 22 [5376/9756]	Loss: 0.1960	LR: 0.004000
Training Epoch: 22 [5632/9756]	Loss: 0.2019	LR: 0.004000
Training Epoch: 22 [5888/9756]	Loss: 0.1793	LR: 0.004000
Training Epoch: 22 [6144/9756]	Loss: 0.2264	LR: 0.004000
Training Epoch: 22 [6400/9756]	Loss: 0.1601	LR: 0.004000
Training Epoch: 22 [6656/9756]	Loss: 0.1743	LR: 0.004000
Training Epoch: 22 [6912/9756]	Loss: 0.2378	LR: 0.004000
Training Epoch: 22 [7168/9756]	Loss: 0.1814	LR: 0.004000
Training Epoch: 22 [7424/9756]	Loss: 0.1963	LR: 0.004000
Training Epoch: 22 [7680/9756]	Loss: 0.1683	LR: 0.004000
Training Epoch: 22 [7936/9756]	Loss: 0.1372	LR: 0.004000
Training Epoch: 22 [8192/9756]	Loss: 0.1960	LR: 0.004000
Training Epoch: 22 [8448/9756]	Loss: 0.1658	LR: 0.004000
Training Epoch: 22 [8704/9756]	Loss: 0.2384	LR: 0.004000
Training Epoch: 22 [8960/9756]	Loss: 0.1958	LR: 0.004000
Training Epoch: 22 [9216/9756]	Loss: 0.1777	LR: 0.004000
Training Epoch: 22 [9472/9756]	Loss: 0.1985	LR: 0.004000
Training Epoch: 22 [9728/9756]	Loss: 0.1671	LR: 0.004000
Training Epoch: 22 [9756/9756]	Loss: 0.5350	LR: 0.004000
Epoch 22 - Average Train Loss: 0.1942, Train Accuracy: 0.9223
Epoch 22 training time consumed: 141.19s
Evaluating Network.....
Test set: Epoch: 22, Average loss: 0.0007, Accuracy: 0.9308, Time consumed: 7.94s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-22-best.pth
Training Epoch: 23 [256/9756]	Loss: 0.1378	LR: 0.004000
Training Epoch: 23 [512/9756]	Loss: 0.1522	LR: 0.004000
Training Epoch: 23 [768/9756]	Loss: 0.2268	LR: 0.004000
Training Epoch: 23 [1024/9756]	Loss: 0.2402	LR: 0.004000
Training Epoch: 23 [1280/9756]	Loss: 0.2038	LR: 0.004000
Training Epoch: 23 [1536/9756]	Loss: 0.1882	LR: 0.004000
Training Epoch: 23 [1792/9756]	Loss: 0.2321	LR: 0.004000
Training Epoch: 23 [2048/9756]	Loss: 0.2220	LR: 0.004000
Training Epoch: 23 [2304/9756]	Loss: 0.1441	LR: 0.004000
Training Epoch: 23 [2560/9756]	Loss: 0.2275	LR: 0.004000
Training Epoch: 23 [2816/9756]	Loss: 0.1676	LR: 0.004000
Training Epoch: 23 [3072/9756]	Loss: 0.2075	LR: 0.004000
Training Epoch: 23 [3328/9756]	Loss: 0.2230	LR: 0.004000
Training Epoch: 23 [3584/9756]	Loss: 0.2109	LR: 0.004000
Training Epoch: 23 [3840/9756]	Loss: 0.1794	LR: 0.004000
Training Epoch: 23 [4096/9756]	Loss: 0.1773	LR: 0.004000
Training Epoch: 23 [4352/9756]	Loss: 0.1650	LR: 0.004000
Training Epoch: 23 [4608/9756]	Loss: 0.1701	LR: 0.004000
Training Epoch: 23 [4864/9756]	Loss: 0.1266	LR: 0.004000
Training Epoch: 23 [5120/9756]	Loss: 0.1602	LR: 0.004000
Training Epoch: 23 [5376/9756]	Loss: 0.1543	LR: 0.004000
Training Epoch: 23 [5632/9756]	Loss: 0.1760	LR: 0.004000
Training Epoch: 23 [5888/9756]	Loss: 0.2181	LR: 0.004000
Training Epoch: 23 [6144/9756]	Loss: 0.1906	LR: 0.004000
Training Epoch: 23 [6400/9756]	Loss: 0.2026	LR: 0.004000
Training Epoch: 23 [6656/9756]	Loss: 0.1797	LR: 0.004000
Training Epoch: 23 [6912/9756]	Loss: 0.2405	LR: 0.004000
Training Epoch: 23 [7168/9756]	Loss: 0.2317	LR: 0.004000
Training Epoch: 23 [7424/9756]	Loss: 0.1660	LR: 0.004000
Training Epoch: 23 [7680/9756]	Loss: 0.1434	LR: 0.004000
Training Epoch: 23 [7936/9756]	Loss: 0.1706	LR: 0.004000
Training Epoch: 23 [8192/9756]	Loss: 0.2021	LR: 0.004000
Training Epoch: 23 [8448/9756]	Loss: 0.1997	LR: 0.004000
Training Epoch: 23 [8704/9756]	Loss: 0.1927	LR: 0.004000
Training Epoch: 23 [8960/9756]	Loss: 0.1696	LR: 0.004000
Training Epoch: 23 [9216/9756]	Loss: 0.1723	LR: 0.004000
Training Epoch: 23 [9472/9756]	Loss: 0.1638	LR: 0.004000
Training Epoch: 23 [9728/9756]	Loss: 0.1920	LR: 0.004000
Training Epoch: 23 [9756/9756]	Loss: 0.0711	LR: 0.004000
Epoch 23 - Average Train Loss: 0.1872, Train Accuracy: 0.9249
Epoch 23 training time consumed: 141.25s
Evaluating Network.....
Test set: Epoch: 23, Average loss: 0.0008, Accuracy: 0.9254, Time consumed: 7.91s
Training Epoch: 24 [256/9756]	Loss: 0.1917	LR: 0.004000
Training Epoch: 24 [512/9756]	Loss: 0.2075	LR: 0.004000
Training Epoch: 24 [768/9756]	Loss: 0.1861	LR: 0.004000
Training Epoch: 24 [1024/9756]	Loss: 0.1485	LR: 0.004000
Training Epoch: 24 [1280/9756]	Loss: 0.2220	LR: 0.004000
Training Epoch: 24 [1536/9756]	Loss: 0.2108	LR: 0.004000
Training Epoch: 24 [1792/9756]	Loss: 0.1615	LR: 0.004000
Training Epoch: 24 [2048/9756]	Loss: 0.1702	LR: 0.004000
Training Epoch: 24 [2304/9756]	Loss: 0.1743	LR: 0.004000
Training Epoch: 24 [2560/9756]	Loss: 0.1347	LR: 0.004000
Training Epoch: 24 [2816/9756]	Loss: 0.1488	LR: 0.004000
Training Epoch: 24 [3072/9756]	Loss: 0.1610	LR: 0.004000
Training Epoch: 24 [3328/9756]	Loss: 0.1363	LR: 0.004000
Training Epoch: 24 [3584/9756]	Loss: 0.1662	LR: 0.004000
Training Epoch: 24 [3840/9756]	Loss: 0.1700	LR: 0.004000
Training Epoch: 24 [4096/9756]	Loss: 0.1482	LR: 0.004000
Training Epoch: 24 [4352/9756]	Loss: 0.1508	LR: 0.004000
Training Epoch: 24 [4608/9756]	Loss: 0.2000	LR: 0.004000
Training Epoch: 24 [4864/9756]	Loss: 0.2105	LR: 0.004000
Training Epoch: 24 [5120/9756]	Loss: 0.1969	LR: 0.004000
Training Epoch: 24 [5376/9756]	Loss: 0.2143	LR: 0.004000
Training Epoch: 24 [5632/9756]	Loss: 0.1455	LR: 0.004000
Training Epoch: 24 [5888/9756]	Loss: 0.1696	LR: 0.004000
Training Epoch: 24 [6144/9756]	Loss: 0.1591	LR: 0.004000
Training Epoch: 24 [6400/9756]	Loss: 0.1816	LR: 0.004000
Training Epoch: 24 [6656/9756]	Loss: 0.1816	LR: 0.004000
Training Epoch: 24 [6912/9756]	Loss: 0.2291	LR: 0.004000
Training Epoch: 24 [7168/9756]	Loss: 0.1496	LR: 0.004000
Training Epoch: 24 [7424/9756]	Loss: 0.1985	LR: 0.004000
Training Epoch: 24 [7680/9756]	Loss: 0.2432	LR: 0.004000
Training Epoch: 24 [7936/9756]	Loss: 0.1614	LR: 0.004000
Training Epoch: 24 [8192/9756]	Loss: 0.1685	LR: 0.004000
Training Epoch: 24 [8448/9756]	Loss: 0.1429	LR: 0.004000
Training Epoch: 24 [8704/9756]	Loss: 0.1946	LR: 0.004000
Training Epoch: 24 [8960/9756]	Loss: 0.1527	LR: 0.004000
Training Epoch: 24 [9216/9756]	Loss: 0.1883	LR: 0.004000
Training Epoch: 24 [9472/9756]	Loss: 0.1522	LR: 0.004000
Training Epoch: 24 [9728/9756]	Loss: 0.1530	LR: 0.004000
Training Epoch: 24 [9756/9756]	Loss: 0.5357	LR: 0.004000
Epoch 24 - Average Train Loss: 0.1769, Train Accuracy: 0.9279
Epoch 24 training time consumed: 141.12s
Evaluating Network.....
Test set: Epoch: 24, Average loss: 0.0010, Accuracy: 0.9128, Time consumed: 8.12s
Training Epoch: 25 [256/9756]	Loss: 0.1627	LR: 0.004000
Training Epoch: 25 [512/9756]	Loss: 0.1604	LR: 0.004000
Training Epoch: 25 [768/9756]	Loss: 0.2632	LR: 0.004000
Training Epoch: 25 [1024/9756]	Loss: 0.2532	LR: 0.004000
Training Epoch: 25 [1280/9756]	Loss: 0.2258	LR: 0.004000
Training Epoch: 25 [1536/9756]	Loss: 0.2071	LR: 0.004000
Training Epoch: 25 [1792/9756]	Loss: 0.2039	LR: 0.004000
Training Epoch: 25 [2048/9756]	Loss: 0.2243	LR: 0.004000
Training Epoch: 25 [2304/9756]	Loss: 0.1505	LR: 0.004000
Training Epoch: 25 [2560/9756]	Loss: 0.2265	LR: 0.004000
Training Epoch: 25 [2816/9756]	Loss: 0.1707	LR: 0.004000
Training Epoch: 25 [3072/9756]	Loss: 0.1712	LR: 0.004000
Training Epoch: 25 [3328/9756]	Loss: 0.1587	LR: 0.004000
Training Epoch: 25 [3584/9756]	Loss: 0.2424	LR: 0.004000
Training Epoch: 25 [3840/9756]	Loss: 0.1872	LR: 0.004000
Training Epoch: 25 [4096/9756]	Loss: 0.1712	LR: 0.004000
Training Epoch: 25 [4352/9756]	Loss: 0.1303	LR: 0.004000
Training Epoch: 25 [4608/9756]	Loss: 0.1487	LR: 0.004000
Training Epoch: 25 [4864/9756]	Loss: 0.2158	LR: 0.004000
Training Epoch: 25 [5120/9756]	Loss: 0.1359	LR: 0.004000
Training Epoch: 25 [5376/9756]	Loss: 0.1658	LR: 0.004000
Training Epoch: 25 [5632/9756]	Loss: 0.1974	LR: 0.004000
Training Epoch: 25 [5888/9756]	Loss: 0.2208	LR: 0.004000
Training Epoch: 25 [6144/9756]	Loss: 0.1404	LR: 0.004000
Training Epoch: 25 [6400/9756]	Loss: 0.2255	LR: 0.004000
Training Epoch: 25 [6656/9756]	Loss: 0.2557	LR: 0.004000
Training Epoch: 25 [6912/9756]	Loss: 0.1967	LR: 0.004000
Training Epoch: 25 [7168/9756]	Loss: 0.1592	LR: 0.004000
Training Epoch: 25 [7424/9756]	Loss: 0.1623	LR: 0.004000
Training Epoch: 25 [7680/9756]	Loss: 0.1615	LR: 0.004000
Training Epoch: 25 [7936/9756]	Loss: 0.1938	LR: 0.004000
Training Epoch: 25 [8192/9756]	Loss: 0.2037	LR: 0.004000
Training Epoch: 25 [8448/9756]	Loss: 0.1403	LR: 0.004000
Training Epoch: 25 [8704/9756]	Loss: 0.2095	LR: 0.004000
Training Epoch: 25 [8960/9756]	Loss: 0.1545	LR: 0.004000
Training Epoch: 25 [9216/9756]	Loss: 0.1347	LR: 0.004000
Training Epoch: 25 [9472/9756]	Loss: 0.1462	LR: 0.004000
Training Epoch: 25 [9728/9756]	Loss: 0.1319	LR: 0.004000
Training Epoch: 25 [9756/9756]	Loss: 0.1778	LR: 0.004000
Epoch 25 - Average Train Loss: 0.1844, Train Accuracy: 0.9235
Epoch 25 training time consumed: 141.05s
Evaluating Network.....
Test set: Epoch: 25, Average loss: 0.0007, Accuracy: 0.9356, Time consumed: 8.09s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-25-best.pth
Training Epoch: 26 [256/9756]	Loss: 0.1617	LR: 0.004000
Training Epoch: 26 [512/9756]	Loss: 0.1699	LR: 0.004000
Training Epoch: 26 [768/9756]	Loss: 0.1389	LR: 0.004000
Training Epoch: 26 [1024/9756]	Loss: 0.1551	LR: 0.004000
Training Epoch: 26 [1280/9756]	Loss: 0.1066	LR: 0.004000
Training Epoch: 26 [1536/9756]	Loss: 0.1894	LR: 0.004000
Training Epoch: 26 [1792/9756]	Loss: 0.1599	LR: 0.004000
Training Epoch: 26 [2048/9756]	Loss: 0.1842	LR: 0.004000
Training Epoch: 26 [2304/9756]	Loss: 0.1496	LR: 0.004000
Training Epoch: 26 [2560/9756]	Loss: 0.1705	LR: 0.004000
Training Epoch: 26 [2816/9756]	Loss: 0.1322	LR: 0.004000
Training Epoch: 26 [3072/9756]	Loss: 0.2291	LR: 0.004000
Training Epoch: 26 [3328/9756]	Loss: 0.1582	LR: 0.004000
Training Epoch: 26 [3584/9756]	Loss: 0.2094	LR: 0.004000
Training Epoch: 26 [3840/9756]	Loss: 0.1857	LR: 0.004000
Training Epoch: 26 [4096/9756]	Loss: 0.1834	LR: 0.004000
Training Epoch: 26 [4352/9756]	Loss: 0.2310	LR: 0.004000
Training Epoch: 26 [4608/9756]	Loss: 0.1512	LR: 0.004000
Training Epoch: 26 [4864/9756]	Loss: 0.1985	LR: 0.004000
Training Epoch: 26 [5120/9756]	Loss: 0.1646	LR: 0.004000
Training Epoch: 26 [5376/9756]	Loss: 0.2040	LR: 0.004000
Training Epoch: 26 [5632/9756]	Loss: 0.1518	LR: 0.004000
Training Epoch: 26 [5888/9756]	Loss: 0.1468	LR: 0.004000
Training Epoch: 26 [6144/9756]	Loss: 0.1701	LR: 0.004000
Training Epoch: 26 [6400/9756]	Loss: 0.1850	LR: 0.004000
Training Epoch: 26 [6656/9756]	Loss: 0.1969	LR: 0.004000
Training Epoch: 26 [6912/9756]	Loss: 0.1563	LR: 0.004000
Training Epoch: 26 [7168/9756]	Loss: 0.1591	LR: 0.004000
Training Epoch: 26 [7424/9756]	Loss: 0.1977	LR: 0.004000
Training Epoch: 26 [7680/9756]	Loss: 0.1900	LR: 0.004000
Training Epoch: 26 [7936/9756]	Loss: 0.1982	LR: 0.004000
Training Epoch: 26 [8192/9756]	Loss: 0.1560	LR: 0.004000
Training Epoch: 26 [8448/9756]	Loss: 0.1895	LR: 0.004000
Training Epoch: 26 [8704/9756]	Loss: 0.1679	LR: 0.004000
Training Epoch: 26 [8960/9756]	Loss: 0.1769	LR: 0.004000
Training Epoch: 26 [9216/9756]	Loss: 0.2385	LR: 0.004000
Training Epoch: 26 [9472/9756]	Loss: 0.1617	LR: 0.004000
Training Epoch: 26 [9728/9756]	Loss: 0.1829	LR: 0.004000
Training Epoch: 26 [9756/9756]	Loss: 0.0909	LR: 0.004000
Epoch 26 - Average Train Loss: 0.1750, Train Accuracy: 0.9272
Epoch 26 training time consumed: 141.20s
Evaluating Network.....
Test set: Epoch: 26, Average loss: 0.0007, Accuracy: 0.9293, Time consumed: 7.89s
Training Epoch: 27 [256/9756]	Loss: 0.1290	LR: 0.004000
Training Epoch: 27 [512/9756]	Loss: 0.1984	LR: 0.004000
Training Epoch: 27 [768/9756]	Loss: 0.1799	LR: 0.004000
Training Epoch: 27 [1024/9756]	Loss: 0.1738	LR: 0.004000
Training Epoch: 27 [1280/9756]	Loss: 0.1552	LR: 0.004000
Training Epoch: 27 [1536/9756]	Loss: 0.1879	LR: 0.004000
Training Epoch: 27 [1792/9756]	Loss: 0.2231	LR: 0.004000
Training Epoch: 27 [2048/9756]	Loss: 0.1644	LR: 0.004000
Training Epoch: 27 [2304/9756]	Loss: 0.1305	LR: 0.004000
Training Epoch: 27 [2560/9756]	Loss: 0.1626	LR: 0.004000
Training Epoch: 27 [2816/9756]	Loss: 0.1352	LR: 0.004000
Training Epoch: 27 [3072/9756]	Loss: 0.1378	LR: 0.004000
Training Epoch: 27 [3328/9756]	Loss: 0.1919	LR: 0.004000
Training Epoch: 27 [3584/9756]	Loss: 0.2284	LR: 0.004000
Training Epoch: 27 [3840/9756]	Loss: 0.2138	LR: 0.004000
Training Epoch: 27 [4096/9756]	Loss: 0.1698	LR: 0.004000
Training Epoch: 27 [4352/9756]	Loss: 0.2034	LR: 0.004000
Training Epoch: 27 [4608/9756]	Loss: 0.1655	LR: 0.004000
Training Epoch: 27 [4864/9756]	Loss: 0.1166	LR: 0.004000
Training Epoch: 27 [5120/9756]	Loss: 0.1875	LR: 0.004000
Training Epoch: 27 [5376/9756]	Loss: 0.1785	LR: 0.004000
Training Epoch: 27 [5632/9756]	Loss: 0.1753	LR: 0.004000
Training Epoch: 27 [5888/9756]	Loss: 0.2095	LR: 0.004000
Training Epoch: 27 [6144/9756]	Loss: 0.2181	LR: 0.004000
Training Epoch: 27 [6400/9756]	Loss: 0.1447	LR: 0.004000
Training Epoch: 27 [6656/9756]	Loss: 0.1251	LR: 0.004000
Training Epoch: 27 [6912/9756]	Loss: 0.1610	LR: 0.004000
Training Epoch: 27 [7168/9756]	Loss: 0.1239	LR: 0.004000
Training Epoch: 27 [7424/9756]	Loss: 0.1492	LR: 0.004000
Training Epoch: 27 [7680/9756]	Loss: 0.1520	LR: 0.004000
Training Epoch: 27 [7936/9756]	Loss: 0.1503	LR: 0.004000
Training Epoch: 27 [8192/9756]	Loss: 0.1499	LR: 0.004000
Training Epoch: 27 [8448/9756]	Loss: 0.1434	LR: 0.004000
Training Epoch: 27 [8704/9756]	Loss: 0.1711	LR: 0.004000
Training Epoch: 27 [8960/9756]	Loss: 0.2030	LR: 0.004000
Training Epoch: 27 [9216/9756]	Loss: 0.2077	LR: 0.004000
Training Epoch: 27 [9472/9756]	Loss: 0.2155	LR: 0.004000
Training Epoch: 27 [9728/9756]	Loss: 0.1701	LR: 0.004000
Training Epoch: 27 [9756/9756]	Loss: 0.2164	LR: 0.004000
Epoch 27 - Average Train Loss: 0.1713, Train Accuracy: 0.9284
Epoch 27 training time consumed: 141.42s
Evaluating Network.....
Test set: Epoch: 27, Average loss: 0.0007, Accuracy: 0.9327, Time consumed: 8.11s
Training Epoch: 28 [256/9756]	Loss: 0.2104	LR: 0.004000
Training Epoch: 28 [512/9756]	Loss: 0.1501	LR: 0.004000
Training Epoch: 28 [768/9756]	Loss: 0.1193	LR: 0.004000
Training Epoch: 28 [1024/9756]	Loss: 0.1538	LR: 0.004000
Training Epoch: 28 [1280/9756]	Loss: 0.1962	LR: 0.004000
Training Epoch: 28 [1536/9756]	Loss: 0.1336	LR: 0.004000
Training Epoch: 28 [1792/9756]	Loss: 0.1787	LR: 0.004000
Training Epoch: 28 [2048/9756]	Loss: 0.1944	LR: 0.004000
Training Epoch: 28 [2304/9756]	Loss: 0.1590	LR: 0.004000
Training Epoch: 28 [2560/9756]	Loss: 0.2076	LR: 0.004000
Training Epoch: 28 [2816/9756]	Loss: 0.1792	LR: 0.004000
Training Epoch: 28 [3072/9756]	Loss: 0.1735	LR: 0.004000
Training Epoch: 28 [3328/9756]	Loss: 0.2049	LR: 0.004000
Training Epoch: 28 [3584/9756]	Loss: 0.1244	LR: 0.004000
Training Epoch: 28 [3840/9756]	Loss: 0.1526	LR: 0.004000
Training Epoch: 28 [4096/9756]	Loss: 0.2119	LR: 0.004000
Training Epoch: 28 [4352/9756]	Loss: 0.1668	LR: 0.004000
Training Epoch: 28 [4608/9756]	Loss: 0.1396	LR: 0.004000
Training Epoch: 28 [4864/9756]	Loss: 0.1476	LR: 0.004000
Training Epoch: 28 [5120/9756]	Loss: 0.1377	LR: 0.004000
Training Epoch: 28 [5376/9756]	Loss: 0.1796	LR: 0.004000
Training Epoch: 28 [5632/9756]	Loss: 0.1946	LR: 0.004000
Training Epoch: 28 [5888/9756]	Loss: 0.1721	LR: 0.004000
Training Epoch: 28 [6144/9756]	Loss: 0.1536	LR: 0.004000
Training Epoch: 28 [6400/9756]	Loss: 0.1430	LR: 0.004000
Training Epoch: 28 [6656/9756]	Loss: 0.1727	LR: 0.004000
Training Epoch: 28 [6912/9756]	Loss: 0.1378	LR: 0.004000
Training Epoch: 28 [7168/9756]	Loss: 0.1562	LR: 0.004000
Training Epoch: 28 [7424/9756]	Loss: 0.2067	LR: 0.004000
Training Epoch: 28 [7680/9756]	Loss: 0.1664	LR: 0.004000
Training Epoch: 28 [7936/9756]	Loss: 0.1606	LR: 0.004000
Training Epoch: 28 [8192/9756]	Loss: 0.2357	LR: 0.004000
Training Epoch: 28 [8448/9756]	Loss: 0.1641	LR: 0.004000
Training Epoch: 28 [8704/9756]	Loss: 0.1588	LR: 0.004000
Training Epoch: 28 [8960/9756]	Loss: 0.1998	LR: 0.004000
Training Epoch: 28 [9216/9756]	Loss: 0.1162	LR: 0.004000
Training Epoch: 28 [9472/9756]	Loss: 0.1697	LR: 0.004000
Training Epoch: 28 [9728/9756]	Loss: 0.1264	LR: 0.004000
Training Epoch: 28 [9756/9756]	Loss: 0.1741	LR: 0.004000
Epoch 28 - Average Train Loss: 0.1673, Train Accuracy: 0.9313
Epoch 28 training time consumed: 140.98s
Evaluating Network.....
Test set: Epoch: 28, Average loss: 0.0007, Accuracy: 0.9240, Time consumed: 8.04s
Training Epoch: 29 [256/9756]	Loss: 0.1129	LR: 0.004000
Training Epoch: 29 [512/9756]	Loss: 0.1573	LR: 0.004000
Training Epoch: 29 [768/9756]	Loss: 0.1485	LR: 0.004000
Training Epoch: 29 [1024/9756]	Loss: 0.1947	LR: 0.004000
Training Epoch: 29 [1280/9756]	Loss: 0.1505	LR: 0.004000
Training Epoch: 29 [1536/9756]	Loss: 0.1314	LR: 0.004000
Training Epoch: 29 [1792/9756]	Loss: 0.1160	LR: 0.004000
Training Epoch: 29 [2048/9756]	Loss: 0.1317	LR: 0.004000
Training Epoch: 29 [2304/9756]	Loss: 0.1506	LR: 0.004000
Training Epoch: 29 [2560/9756]	Loss: 0.1844	LR: 0.004000
Training Epoch: 29 [2816/9756]	Loss: 0.1873	LR: 0.004000
Training Epoch: 29 [3072/9756]	Loss: 0.1783	LR: 0.004000
Training Epoch: 29 [3328/9756]	Loss: 0.2174	LR: 0.004000
Training Epoch: 29 [3584/9756]	Loss: 0.1898	LR: 0.004000
Training Epoch: 29 [3840/9756]	Loss: 0.2066	LR: 0.004000
Training Epoch: 29 [4096/9756]	Loss: 0.1756	LR: 0.004000
Training Epoch: 29 [4352/9756]	Loss: 0.1622	LR: 0.004000
Training Epoch: 29 [4608/9756]	Loss: 0.1652	LR: 0.004000
Training Epoch: 29 [4864/9756]	Loss: 0.1582	LR: 0.004000
Training Epoch: 29 [5120/9756]	Loss: 0.1792	LR: 0.004000
Training Epoch: 29 [5376/9756]	Loss: 0.1748	LR: 0.004000
Training Epoch: 29 [5632/9756]	Loss: 0.1046	LR: 0.004000
Training Epoch: 29 [5888/9756]	Loss: 0.1596	LR: 0.004000
Training Epoch: 29 [6144/9756]	Loss: 0.1731	LR: 0.004000
Training Epoch: 29 [6400/9756]	Loss: 0.1420	LR: 0.004000
Training Epoch: 29 [6656/9756]	Loss: 0.1773	LR: 0.004000
Training Epoch: 29 [6912/9756]	Loss: 0.1523	LR: 0.004000
Training Epoch: 29 [7168/9756]	Loss: 0.1457	LR: 0.004000
Training Epoch: 29 [7424/9756]	Loss: 0.1266	LR: 0.004000
Training Epoch: 29 [7680/9756]	Loss: 0.1445	LR: 0.004000
Training Epoch: 29 [7936/9756]	Loss: 0.1659	LR: 0.004000
Training Epoch: 29 [8192/9756]	Loss: 0.1944	LR: 0.004000
Training Epoch: 29 [8448/9756]	Loss: 0.1920	LR: 0.004000
Training Epoch: 29 [8704/9756]	Loss: 0.1737	LR: 0.004000
Training Epoch: 29 [8960/9756]	Loss: 0.1408	LR: 0.004000
Training Epoch: 29 [9216/9756]	Loss: 0.1382	LR: 0.004000
Training Epoch: 29 [9472/9756]	Loss: 0.1968	LR: 0.004000
Training Epoch: 29 [9728/9756]	Loss: 0.1412	LR: 0.004000
Training Epoch: 29 [9756/9756]	Loss: 0.4088	LR: 0.004000
Epoch 29 - Average Train Loss: 0.1623, Train Accuracy: 0.9328
Epoch 29 training time consumed: 140.86s
Evaluating Network.....
Test set: Epoch: 29, Average loss: 0.0007, Accuracy: 0.9337, Time consumed: 8.09s
Training Epoch: 30 [256/9756]	Loss: 0.1875	LR: 0.004000
Training Epoch: 30 [512/9756]	Loss: 0.1361	LR: 0.004000
Training Epoch: 30 [768/9756]	Loss: 0.1170	LR: 0.004000
Training Epoch: 30 [1024/9756]	Loss: 0.1645	LR: 0.004000
Training Epoch: 30 [1280/9756]	Loss: 0.1276	LR: 0.004000
Training Epoch: 30 [1536/9756]	Loss: 0.1601	LR: 0.004000
Training Epoch: 30 [1792/9756]	Loss: 0.1631	LR: 0.004000
Training Epoch: 30 [2048/9756]	Loss: 0.1557	LR: 0.004000
Training Epoch: 30 [2304/9756]	Loss: 0.1594	LR: 0.004000
Training Epoch: 30 [2560/9756]	Loss: 0.1851	LR: 0.004000
Training Epoch: 30 [2816/9756]	Loss: 0.1894	LR: 0.004000
Training Epoch: 30 [3072/9756]	Loss: 0.1768	LR: 0.004000
Training Epoch: 30 [3328/9756]	Loss: 0.1655	LR: 0.004000
Training Epoch: 30 [3584/9756]	Loss: 0.1334	LR: 0.004000
Training Epoch: 30 [3840/9756]	Loss: 0.1353	LR: 0.004000
Training Epoch: 30 [4096/9756]	Loss: 0.2041	LR: 0.004000
Training Epoch: 30 [4352/9756]	Loss: 0.1722	LR: 0.004000
Training Epoch: 30 [4608/9756]	Loss: 0.1459	LR: 0.004000
Training Epoch: 30 [4864/9756]	Loss: 0.1385	LR: 0.004000
Training Epoch: 30 [5120/9756]	Loss: 0.2061	LR: 0.004000
Training Epoch: 30 [5376/9756]	Loss: 0.1992	LR: 0.004000
Training Epoch: 30 [5632/9756]	Loss: 0.2017	LR: 0.004000
Training Epoch: 30 [5888/9756]	Loss: 0.1359	LR: 0.004000
Training Epoch: 30 [6144/9756]	Loss: 0.1789	LR: 0.004000
Training Epoch: 30 [6400/9756]	Loss: 0.1799	LR: 0.004000
Training Epoch: 30 [6656/9756]	Loss: 0.1701	LR: 0.004000
Training Epoch: 30 [6912/9756]	Loss: 0.1652	LR: 0.004000
Training Epoch: 30 [7168/9756]	Loss: 0.1737	LR: 0.004000
Training Epoch: 30 [7424/9756]	Loss: 0.1449	LR: 0.004000
Training Epoch: 30 [7680/9756]	Loss: 0.1601	LR: 0.004000
Training Epoch: 30 [7936/9756]	Loss: 0.1686	LR: 0.004000
Training Epoch: 30 [8192/9756]	Loss: 0.1416	LR: 0.004000
Training Epoch: 30 [8448/9756]	Loss: 0.1707	LR: 0.004000
Training Epoch: 30 [8704/9756]	Loss: 0.1661	LR: 0.004000
Training Epoch: 30 [8960/9756]	Loss: 0.1692	LR: 0.004000
Training Epoch: 30 [9216/9756]	Loss: 0.1514	LR: 0.004000
Training Epoch: 30 [9472/9756]	Loss: 0.1616	LR: 0.004000
Training Epoch: 30 [9728/9756]	Loss: 0.1784	LR: 0.004000
Training Epoch: 30 [9756/9756]	Loss: 0.2157	LR: 0.004000
Epoch 30 - Average Train Loss: 0.1644, Train Accuracy: 0.9325
Epoch 30 training time consumed: 141.25s
Evaluating Network.....
Test set: Epoch: 30, Average loss: 0.0006, Accuracy: 0.9390, Time consumed: 7.88s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_26_July_2025_05h_07m_14s/ResNet18-MUCAC-seed6-ret25-30-best.pth
Training Epoch: 31 [256/9756]	Loss: 0.1511	LR: 0.004000
Training Epoch: 31 [512/9756]	Loss: 0.1455	LR: 0.004000
Training Epoch: 31 [768/9756]	Loss: 0.1698	LR: 0.004000
Training Epoch: 31 [1024/9756]	Loss: 0.1546	LR: 0.004000
Training Epoch: 31 [1280/9756]	Loss: 0.1868	LR: 0.004000
Training Epoch: 31 [1536/9756]	Loss: 0.1737	LR: 0.004000
Training Epoch: 31 [1792/9756]	Loss: 0.2123	LR: 0.004000
Training Epoch: 31 [2048/9756]	Loss: 0.1485	LR: 0.004000
Training Epoch: 31 [2304/9756]	Loss: 0.1607	LR: 0.004000
Training Epoch: 31 [2560/9756]	Loss: 0.1853	LR: 0.004000
Training Epoch: 31 [2816/9756]	Loss: 0.1753	LR: 0.004000
Training Epoch: 31 [3072/9756]	Loss: 0.0994	LR: 0.004000
Training Epoch: 31 [3328/9756]	Loss: 0.2007	LR: 0.004000
Training Epoch: 31 [3584/9756]	Loss: 0.1785	LR: 0.004000
Training Epoch: 31 [3840/9756]	Loss: 0.1228	LR: 0.004000
Training Epoch: 31 [4096/9756]	Loss: 0.1599	LR: 0.004000
Training Epoch: 31 [4352/9756]	Loss: 0.2084	LR: 0.004000
Training Epoch: 31 [4608/9756]	Loss: 0.1438	LR: 0.004000
Training Epoch: 31 [4864/9756]	Loss: 0.1450	LR: 0.004000
Training Epoch: 31 [5120/9756]	Loss: 0.1347	LR: 0.004000
Training Epoch: 31 [5376/9756]	Loss: 0.1840	LR: 0.004000
Training Epoch: 31 [5632/9756]	Loss: 0.1710	LR: 0.004000
Training Epoch: 31 [5888/9756]	Loss: 0.1997	LR: 0.004000
Training Epoch: 31 [6144/9756]	Loss: 0.1208	LR: 0.004000
Training Epoch: 31 [6400/9756]	Loss: 0.1912	LR: 0.004000
Training Epoch: 31 [6656/9756]	Loss: 0.1743	LR: 0.004000
Training Epoch: 31 [6912/9756]	Loss: 0.1223	LR: 0.004000
Training Epoch: 31 [7168/9756]	Loss: 0.1230	LR: 0.004000
Training Epoch: 31 [7424/9756]	Loss: 0.1756	LR: 0.004000
Training Epoch: 31 [7680/9756]	Loss: 0.1053	LR: 0.004000
Training Epoch: 31 [7936/9756]	Loss: 0.1816	LR: 0.004000
Training Epoch: 31 [8192/9756]	Loss: 0.2166	LR: 0.004000
Training Epoch: 31 [8448/9756]	Loss: 0.1864	LR: 0.004000
Training Epoch: 31 [8704/9756]	Loss: 0.1200	LR: 0.004000
Training Epoch: 31 [8960/9756]	Loss: 0.1601	LR: 0.004000
Training Epoch: 31 [9216/9756]	Loss: 0.1706	LR: 0.004000
Training Epoch: 31 [9472/9756]	Loss: 0.1452	LR: 0.004000
Training Epoch: 31 [9728/9756]	Loss: 0.1374	LR: 0.004000
Training Epoch: 31 [9756/9756]	Loss: 0.2135	LR: 0.004000
Epoch 31 - Average Train Loss: 0.1618, Train Accuracy: 0.9331
Epoch 31 training time consumed: 140.81s
Evaluating Network.....
Test set: Epoch: 31, Average loss: 0.0009, Accuracy: 0.9196, Time consumed: 8.01s
Valid (Test) Dl:  2065
Train Dl:  10548
Retain Train Dl:  9756
Forget Train Dl:  792
Retain Valid Dl:  9756
Forget Valid Dl:  792
retain_prob Distribution: 2065 samples
test_prob Distribution: 2065 samples
forget_prob Distribution: 792 samples
Set1 Distribution: 792 samples
Set2 Distribution: 792 samples
Set1 Distribution: 792 samples
Set2 Distribution: 792 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Test Accuracy: 92.18495178222656
Retain Accuracy: 91.96714782714844
Zero-Retain Forget (ZRF): 0.8086732029914856
Membership Inference Attack (MIA): 0.33585858585858586
Forget vs Retain Membership Inference Attack (MIA): 0.5078864353312302
Forget vs Test Membership Inference Attack (MIA): 0.580441640378549
Test vs Retain Membership Inference Attack (MIA): 0.562953995157385
Train vs Test Membership Inference Attack (MIA): 0.5205811138014528
Forget Set Accuracy (Df): 92.90364074707031
Method Execution Time: 5764.67 seconds
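The summary above reports a Zero-Retrain Forgetting (ZRF) score and several membership-inference-attack (MIA) accuracies, but the log does not show how these are computed. Below is a minimal, hedged sketch of typical implementations: ZRF as one minus the mean Jensen-Shannon divergence between the unlearned model's and a randomly initialized model's softmax outputs on the forget set, and MIA as a simple confidence-threshold attack between two sample sets. The function names (`zrf`, `mia_score`) and the exact attack used by this codebase are assumptions; the actual script may use a learned attack classifier instead of a threshold sweep.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two probability vectors.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def zrf(unlearned_probs, random_probs):
    # ZRF = 1 - mean JS divergence over forget-set samples; values near 1
    # mean the unlearned model behaves like a randomly initialized one
    # (i.e., retains little information) on the forget set.
    divs = [js_divergence(p, q) for p, q in zip(unlearned_probs, random_probs)]
    return 1.0 - float(np.mean(divs))

def mia_score(member_conf, nonmember_conf):
    # Threshold attack on per-sample confidence scores: sweep all
    # thresholds and return the best attack accuracy. 0.5 means the
    # attacker cannot distinguish members from non-members.
    scores = np.concatenate([member_conf, nonmember_conf])
    labels = np.concatenate(
        [np.ones(len(member_conf)), np.zeros(len(nonmember_conf))]
    )
    best = 0.5
    for t in np.unique(scores):
        acc = float(np.mean((scores >= t) == labels))
        best = max(best, acc)
    return best
```

Under this reading, the "Forget vs Retain MIA" of ~0.508 above would indicate the attacker barely beats chance at separating forget samples from retain samples, which is the desired outcome after unlearning.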
